
Good Samaritans in cyberspace.

I. INTRODUCTION

One of the most salient and contentious issues associated with the fast-developing online industry is the liability of online service providers(1) for transmitting content created by others.(2) In an attempt to address part of the issue, Congress passed the Telecommunications Act of 1996,(3) which President Clinton signed into law on February 8, 1996. While the indecency portions of the Act have since been struck down as unconstitutional,(4) Section 230(c)(1) of the Act, known as the "Good Samaritan" Provision, remains in effect.(5) The Provision provides that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."(6) Thus, even if an online service provider screens some of the content on its system, thereby acting as a Good Samaritan, it cannot be subject to "publisher" defamation liability for content that it transmits but does not create, such as Internet content and subscriber-generated content.(7)

Congress included the Good Samaritan Provision in the Telecommunications Act to overrule Stratton Oakmont Inc. v. Prodigy Services Co.(8) as it might be applied to online service providers who block or screen "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable"(9) materials. In Stratton, an online service provider was subject to liability for defamatory material posted by another on the grounds that the provider exercised content control over its bulletin boards and was, therefore, considered a "publisher" rather than a "distributor" of the defamatory material.(10) Congress overruled Stratton because of its tendency to induce online providers to abandon all content control, which runs contrary to the Act's purpose of encouraging providers to screen and remove indecent matter from their systems.(11)

This Article will demonstrate that the Good Samaritan Provision, by protecting providers who restrict or screen objectionable materials only from liability as publishers, will not achieve its intended result. Although it foreclosed "publisher" liability for others' content, the Provision left the door open for online providers to be held liable as "distributors" if they have reason to know of defamatory material by virtue of content control efforts.(12) By failing to perceive the special implications for the online medium of the link between content control and online provider liability, the Good Samaritan Provision is likely to perpetuate Stratton's deleterious effect of inducing providers to relinquish control over their systems, thereby undermining the purpose of defamation law, severely burdening the online information industry, and impeding First Amendment interests in the free flow of information.

The second part of this Article proposes a revised form of the Good Samaritan Provision which addresses the problems outlined above by severing the link between content control and online provider liability. The proposed model for online defamation liability(13) is tailored to the unique features of the medium and is compatible with the interests of both the online industry and the public.

II. A NEW AGE OF COMMUNICATION

Since the advent of the telegraph, courts and legislators have tended to treat each new communications technology like the existing technology that it most closely resembles.(14) In the case of online service providers, initial attempts at regulating by analogy have been especially problematic, perhaps because it is not immediately clear which communications technology is most closely analogous. Online networks represent a revolutionary synthesis of several traditional communication media.(15) In offering their own content, online providers act much like a television station, newspaper, or magazine. By offering users e-mail to transmit private messages to other users, online services function similarly to the postal service. As the host of "chat" groups allowing simultaneous online discussions between two or more users, these services operate in the role of a telephone system.

The extent to which online service providers control content varies widely.(16) Like traditional media, online providers control and edit the information they generate or contract with others to provide. Although subscriber-generated content is more difficult to control, some providers use software which automatically deletes vulgar or offensive language as it is transmitted.(17) Most of the larger commercial services also employ gatekeepers or "moderators" who: (1) review some or all incoming messages before they are posted online to determine whether they are related to the topic to which the forum is dedicated; or (2) screen out material which is profane or which otherwise does not conform to standards established by the service.(18) However, due to the exponential growth of online traffic, as well as the speed of transmission, comprehensive review of subscriber-generated content is becoming less and less feasible, even by those providers with the greatest monitoring resources.(19)
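
Why such automatic screening proves clumsy is easy to see. The following is a minimal sketch, in Python, of the naive substring matching that the incidents collected in note 17, infra, suggest such filters employed; the blocklist and sample messages are hypothetical illustrations, not any provider's actual software.

    # A minimal sketch of naive substring-based screening; the
    # blocklist and sample messages are hypothetical, not any
    # provider's actual filter.
    BLOCKLIST = ["sex", "breast"]  # hypothetical blocked terms

    def should_suppress(message: str) -> bool:
        """Return True if the message trips the filter."""
        lowered = message.lower()
        return any(term in lowered for term in BLOCKLIST)

    # False positives of the sort reported in note 17, infra: a
    # support-group posting and an innocent place name both trip
    # the filter because a blocked term appears as a substring.
    print(should_suppress("breast cancer support group meeting"))  # True
    print(should_suppress("Directions to Middlesex County"))       # True
    print(should_suppress("Weekly recipe exchange"))               # False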

Some aspects of online offerings defy any level of provider control. For instance, the very nature of live "chat" rooms means that providers can no more prescreen such transmissions than a telephone company can screen telephone conversations.(20) Moreover, the linking feature which allows users to roam from network to network precludes the ability of the host network to monitor material accessed from another network.(21)

Another type of material which is difficult for providers to control is material obtained from the Internet and transmitted through their systems. A key feature of the Internet is remote information retrieval,(22) which allows a user to search and retrieve information located on remote computers anywhere in the world. With millions of users using remote information retrieval to roam from network to network every day, it is technically impossible for the host network to monitor or screen all the material accessed from other networks.(23) Content control over newsgroups using Usenet,(24) a distributed message database system governed by voluntary rules for passing and maintaining newsgroups from server to server, is also very limited. Most Usenet newsgroups are unmoderated and have no central hub from which editorial control can be exercised.(25) If a particular message on a Usenet newsgroup is defamatory, server administrators are generally limited to terminating subscriptions to that newsgroup.(26)
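
The administrator's limited recourse described above can be made concrete with a brief sketch, using hypothetical names and data: because articles propagate from server to server with no central hub, the administrator cannot recall a defamatory message from other servers and is limited to dropping the offending newsgroup locally.

    # Hypothetical model of a Usenet-carrying server's limited
    # editorial options; group names are illustrative only.
    carried_groups = {"alt.invest.talk", "rec.food.recipes"}

    def respond_to_defamatory_post(group: str) -> str:
        """The server cannot edit or recall an article already
        propagated to peer servers; its practical remedy is to
        stop carrying the offending newsgroup locally."""
        if group in carried_groups:
            carried_groups.discard(group)
            return f"terminated local subscription to {group}"
        return "group not carried; no local remedy available"

    print(respond_to_defamatory_post("alt.invest.talk"))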

As online providers attempt to control the content transmitted by their systems, they face not only technical limitations, but legal limitations as well. For example, Chapter 119 of the Electronic Communications Privacy Act (ECPA)(27) prohibits the interception or disclosure of private electronic communications such as e-mail.(28) In addition, providers who offer third-party content as part of their service are sometimes prohibited by contract from editing or interfering with such content.(29)

The limitations faced by online service providers trying to control content generated or accessed by subscribers point to what is perhaps the most fundamental difference between cyberspace and traditional media forms: the transformation and empowerment of the user from a passive consumer to a producer of information.(30) The relationship between producer and user online is fluid and reversible due to the interactive nature of online communication.(31) In addition, online providers offer communication forums to a virtually limitless and diverse number of information providers and consumers. This stands in contrast to the traditional electronic mass media, which must restrict the number of potential information producers due to spectrum scarcity.(32) Finally, the online relationships between information producers and users are more direct than in traditional forms of mass communication because they are largely unmediated by gatekeepers.(33)

Thus, online service providers not only perform the tasks of many traditional communications media, such as the telephone and the post office, but also represent an entirely new medium with new legal challenges. As will be demonstrated, the provision of online services resists traditional centralized methods of legal regulation and calls for new ways of analyzing online service provider liability for transmitted content. Because this Article examines the liability issue in the context of defamation law, it is useful to begin with a brief overview of relevant defamation law principles.

III. THE PUBLISHER/DISTRIBUTOR DISTINCTION IN DEFAMATION LAW

The law of civil defamation serves "the public policy that individuals should be free to enjoy their reputations unimpaired by false and defamatory attacks."(34) While defamation law varies from state to state, general principles common to all jurisdictions may be distilled.(35)

An essential element of a defamation claim is "publication."(36) Publication is generally described as the intentional or negligent communication of the allegedly defamatory statement to a third person.(37) Under this standard, "publishers" are regarded as persons or entities exercising such extensive control over the content at issue--either by creating, editing, or reviewing the content--that knowledge of the defamation can be fairly imputed as a matter of law.(38) In other words, publishers are deemed to have a "reason to know"(39) of defamatory matter by virtue of their editorial control. Entities such as newspapers and book publishers have traditionally fallen into the publisher category.

The common law created a separate category of liability for "distributors," such as bookstores, libraries, and newsstands.(40) Unlike publishers, distributors are not presumed negligent; rather, they are subject to defamation liability only if it is proved that they "knew or had reason to know" of the defamation.(41) The notion that a distributor must be aware or have reason to know of the contents of a publication for liability to attach for distributing that publication is grounded in the First Amendment.(42) Because distributors cannot feasibly review, much less control, all of the content distributed, imposing defamation liability on distributors without proof of knowledge or reason to know would severely chill the dissemination of information.(43)

In sum, defamation law seeks to protect individuals from wrongfully damaged reputations, while insulating "innocent" defendants who did not know or have reason to know that they were contributing to such damage. This policy is expressed through separate standards of proof. Publishers are presumed to have reason to know of defamatory matter based on content control and are therefore liable without proof of reason to know. On the other hand, distributors are not presumed to have reason to know of defamatory matter because of their lack of content control. Therefore, proof of reason to know is required to impose liability on a distributor. Because traditional media forms easily fit into one category or the other, the publisher/distributor distinction was a convenient tool for determining the standard of proof of the publication element in a defamation action.

With these principles in mind, two important online defamation cases will be analyzed in order to demonstrate how: (1) the publisher/distributor distinction is not readily adaptable to the online medium; and (2) the publisher/distributor application in the online context effectively undermines defamation law principles, First Amendment interests, and the viability of the online industry.

IV. THE CUBBY AND STRATTON DECISIONS

In the 1991 case of Cubby, Inc. v. CompuServe Inc.,(44) the District Court for the Southern District of New York held that CompuServe, one of the nation's largest online service providers, could not be held liable for defamation claims based on statements posted by a subscriber on one of CompuServe's special interest forums.(45)

The court in Cubby framed its analysis in terms of the publisher/distributor distinction.(46) The Cubby court likened CompuServe to a distributor of information, such as a public library, book store or newsstand.(47) In applying the analogy, the court found that, although CompuServe may decline to carry a given publication and may terminate a subscriber's access to the system, the amount of matter transmitted through CompuServe's network is so great and is uploaded so quickly, that CompuServe has no reasonable opportunity to know of defamatory matter being transmitted.(48) Moreover, the Cubby court found that CompuServe did not exercise any content control over the forum at issue, which was managed by an unrelated company pursuant to a contract with CompuServe.(49)

Given these considerations, the Cubby court reasoned that "CompuServe has no more editorial control over [the forum] than does a public library, book store or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so."(50) Thus, since CompuServe did not have editorial control over the forum at issue, the Cubby court ruled that CompuServe could not be held liable as a publisher.(51)

The Cubby court next evaluated CompuServe's liability as a distributor. Based on the very same evidence that the court used to find that CompuServe should not be treated as a publisher--lack of content control--the court ruled that CompuServe did not have "reason to know" of the allegedly defamatory statements at issue; therefore, the company could not be held liable for defamation as a distributor either.(52)

Because of Cubby's reliance on the large information flow that is common to most online service providers, the issue of provider liability for defamation appeared to be settled. However, a 1995 New York state court decision, Stratton Oakmont Inc. v. Prodigy Services Co.,(53) substantially muddied the waters that Cubby had attempted to clear.

In Stratton, Stratton Oakmont, a securities investment banking firm, sued Prodigy for defamation regarding messages posted by an unidentified person on Prodigy's "Money Talk" electronic bulletin board.(54) On its motion for partial summary judgment, Stratton argued that Prodigy, unlike CompuServe, exercised sufficient editorial control over its bulletin boards to render it a "publisher" for defamation purposes.(55) Evidence of Prodigy's editorial control included the use of automatic software screening of certain offensive words and phrases, and the use of "board leaders," independent contractors who were charged with monitoring and editing bulletin board postings to ensure compliance with content guidelines set forth by Prodigy.(56)

A trial-level court in New York agreed with the plaintiff and held Prodigy to be a publisher for defamation purposes as a matter of law.(57) In reaching its decision, the Stratton court distinguished the case from Cubby on two grounds. First, the court found that since Prodigy advertised and marketed itself as a family-oriented computer network that policed the content of messages posted on its bulletin boards, it "held itself out to the public and its members as controlling the content of its computer bulletin boards."(58) Second, the court held that by "actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and 'bad taste,'" Prodigy made decisions as to content.(59) It thereby "uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards."(60)

The Stratton court ultimately concluded that Prodigy's representations and filtering process constituted a "conscious choice to gain the benefits of editorial control."(61) The company's active involvement in the editorial process was sufficient to deem Prodigy a publisher, rather than a distributor,(62) and subjected it to the heightened standard of defamation liability without proof of fault.(63)

The critical aspect of the Stratton decision for online provider liability purposes is that the court attributed publisher status to Prodigy on the grounds that Prodigy made some decisions as to content through its software screening program and "board leaders."(64) Conspicuously, however, the court made no finding as to whether the quantum of control exercised by Prodigy was sufficient to impute knowledge of the contents of the 60,000 or more messages posted each day on Prodigy's bulletin boards.(65) Nor did the Stratton court find that the nature of control exercised by Prodigy was such that it should have known of the alleged defamation.(66) In its only attempt to even touch on these issues, the Stratton court compared Prodigy's situation to the defendants in Auvil v. CBS "60 Minutes."(67) In Auvil, apple growers sued a television network and local affiliates for an investigative report produced by the network and broadcast by the affiliates.(68) The record established that although the affiliates had the power to exercise editorial control over programming, no such control was employed by the defendants with respect to the particular broadcast at issue.(69) The plaintiffs, however, argued that because the affiliate station had the authority and ability to censor programs, a concomitant duty to censor thereby arose.(70) Explicitly rejecting the plaintiffs' theory, the Auvil court held that the defendants were not publishers and did not have the requisite fault to impose defamation liability:

[P]laintiffs' construction would force the creation of full time editorial boards at local stations throughout the country which possess sufficient knowledge, legal acumen, and access to experts to continually monitor incoming transmissions and exercise on-the-spot discretionary calls or face $75 million lawsuits at every turn. That is not realistic.(71)

In an attempt to distinguish Auvil, the Stratton court noted that "Prodigy has virtually created an editorial staff of Board Leaders who have the ability to continually monitor incoming transmissions and in fact do spend time censoring notes."(72) In doing so, the Stratton court erroneously presumed that Prodigy's Board Leaders had the ability, "sufficient knowledge, legal acumen, and access to experts"(73) necessary to review the transmissions for defamatory character. The evidence, however, showed that Prodigy's content control efforts were not in any way oriented towards detecting defamation.(74)

The Stratton court's unwillingness to seriously consider the nature and extent of Prodigy's content control efforts on the question of whether Prodigy should be imputed with knowledge of defamatory material effectively enlarged the publisher category to include all providers exercising any control over content. This reasoning subjects providers to liability for defamation regardless of whether they had any reason to know of its existence or opportunity to prevent its transmission.(75) The reason for the Stratton court's expansion of the publisher category to include providers exercising any content control is unclear. By making such a grouping, the court evaded the question of what specific amount and nature of content control gives a provider reason to know of transmitted defamatory material.(76) The Stratton court's hesitancy to confront this issue illustrates the difficulty of applying the publisher/distributor distinction to the online medium, particularly since content control varies widely in nature and degree across online services.

The ramifications of the Stratton decision are diverse and detrimental. By imposing strict liability on online service providers merely for exercising content control, regardless of the nature or quantum of such control, Stratton appears to contravene deeply-embedded First Amendment principles which prohibit faultless defamation liability.(77) The Stratton approach is also undesirable in that it gives providers an incentive to adopt a "blind eye" policy and exercise no content control at all.(78) A mass abandonment of content control by online providers would undoubtedly cause an increase in defamatory, obscene, and vulgar transmissions,(79) as well as reduce opportunities and efforts for detection.(80)

The Stratton approach, beyond its counterproductive effect on the aims of defamation law, also represents a major obstacle to the development of the electronic information services industry. By equating content control with fault, Stratton presents online service providers with two unenviable options for avoiding defamation liability: providers must either bear the immense burden of comprehensive screening for defamation or abandon all content control efforts in order to avoid being labeled a publisher.(81) Under the first option, even the largest commercial providers would have difficulty absorbing the cost of exhaustive screening. Providers attempting to monitor and screen all content for defamation would be forced to curtail valuable and innovative features such as interactive and real-time communication forums, because content control over such features would be either extremely difficult or impossible.(82) Inevitably, subscriber-generated and third-party content would be diminished by the necessity to keep screening costs manageable.(83) Postings awaiting approval by the providers would also be significantly delayed.(84)

Small operations, which represent the majority of America's 150,000 bulletin board systems and are instrumental to online diversity, would suffer the most from a strict liability regime.(85) Many would be forced to shut down for lack of resources to screen all content for defamation.(86) Furthermore, the looming prospect of liability would deter new operators from entering the market,(87) and, ultimately, the online industry would be dominated by a few large corporations. To date, the online medium has not been afflicted by a contraction in the number of outlets for free expression.(88) A significant contraction would be highly detrimental, since outlet diversity is widely regarded as one of the online medium's most distinctive and laudable features.(89)

The second option available to providers responding to the Stratton decision, abdicating all control over content to avoid defamation liability, also bodes poorly for the online industry. Without the ability to maintain order within their systems, online providers would have little in the way of product differentiation and value to offer consumers. While bookstores can, without subjecting themselves to greater liability, monitor their shelves to ensure that cookbooks are not erroneously placed in the biography section, the Stratton approach prevents online providers from similarly ensuring that forums devoted to a particular topic remain so. This disorder would result in users wasting time and generally diluting the value of dedicated forums.(90) Moreover, by abdicating content control to avoid strict liability, providers could do nothing to protect users from online harassment, vulgarity, child pornography and other blatantly objectionable material.(91) Thus, content control is valuable because it protects users and allows providers to create an online environment tailored to the market segment targeted by their companies. This, in turn, promotes healthy competition and diversity of service offerings.(92)

In sum, the Stratton strict liability approach for online service providers exercising content control would greatly diminish the diversity of online operators, as well as the amount of interactive content available on the services that survive. In addition to suppressing the development of a vibrant online industry, the Stratton approach poses a corresponding threat to the First Amendment by inhibiting the free flow of information.(93) Moreover, the aspect of online expression most likely to suffer from the Stratton approach is interactive subscriber-generated content, which more than any other facet of online expression promotes First Amendment interests by transforming passive consumers into active producers and disseminators of information and ideas.(94)

In response to the perceived detrimental effects(95) of Stratton, Congress included provisions in the Telecommunications Act of 1996 to overrule this and similar decisions.

V. THE TELECOMMUNICATIONS ACT

On February 8, 1996, President Clinton signed into law the Telecommunications Act of 1996.(96) One section of the sweeping legislation, referred to as the "Communications Decency Act" ("the Act"), imposes criminal penalties for "knowingly" transmitting "indecent" or "patently offensive" material to minors by means of a telecommunications device.(97) The Act establishes a defense to liability for a person that "has taken, in good faith, reasonable, effective, and appropriate actions under the circumstances to restrict or prevent access by minors ... to [an indecent] communication."(98)

On June 11, 1996, in the case of ACLU v. Reno,(99) a three-judge panel of the United States District Court for the Eastern District of Pennsylvania declared the indecency provisions of the Act unconstitutional and enjoined their enforcement.(100) The Reno court's decision was limited to Sections 223(a) and 223(d) of the Act and did not affect the "Good Samaritan Provision" of 47 U.S.C. [sections] 230(c)(1).(101) The Good Samaritan Provision states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."(102) This Provision prevents an online service provider from being treated as a publisher of content that it transmits but does not create.(103) In enacting this section, Congress recognized that an online service provider's efforts to avoid liability under the Act, by making good faith efforts to screen indecent matter, would subject it to strict liability for defamation under Stratton. Hence, Congress specifically included this section to overrule Stratton "and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material."(104)

The flaw in the Good Samaritan Provision is that it does not protect online providers from liability for exercising content control. Simply foreclosing publisher status for providers exercising content control does not go far enough; content control may also implicate distributor liability, if a court determines that such control constitutes "reason to know" of defamatory material on its system.(105)

Moreover, precedent suggests that courts will likely subject providers to distributor liability based on content control.(106) The decision in Cubby, Inc. v. CompuServe Inc.,(107) which survives the Telecommunications Act because it did not treat CompuServe as a publisher, held that CompuServe was not a publisher because it did not exercise content control over the forum as traditional publishers do. For the same reason, the Cubby court found that CompuServe could not be held liable as a distributor because CompuServe's lack of content control meant that it could not have had "reason to know" of defamatory material on the forum.(108) When cases arise in which the provider does exercise some content control over the forum, courts may distinguish Cubby and find that the provider did have reason to know of defamatory material by virtue of such content control. The provider would then be subject to liability as a distributor.

Courts also may follow Stratton's reasoning by holding providers liable, albeit as distributors rather than publishers, for exercising any content control over the forum. This is because courts have little guidance in determining at what point a provider's control over its system amounts to "reason to know" of defamation on the system. The difficulty of applying the "reason to know" standard in a medium where content control varies so widely in nature and degree likely led the Stratton court to subject Prodigy to defamation liability for exercising any content control.(109)

Even if courts do not follow Stratton's "any control equals liability" reasoning, they will still be confronted with the delicate task of drawing a line between "reason to know" and "no reason to know" in the legal frontier of cyberspace. Some courts, following the reasoning in Cubby, will find that the feasibility limitations on controlling Internet content prevent the imposition of distributor "reason to know" liability. Other courts will follow such reasoning only in Cubby's particular factual context, where online service providers have not exercised any content control at all. Thus, it is likely that courts will apply the "reason to know" standard inconsistently, leaving online service providers with little predictability as to what type or degree of control is safe to exercise. This uncertainty provides a disincentive for providers to exercise content control, which is exactly what Congress sought to prevent by overruling Stratton.(110)

In sum, although Congress overruled Stratton, it did not fully sever the underlying link between content control and liability, which survives Stratton in the form of distributor liability. The continued presence of this link, which is inherently problematic in the online context, is likely to perpetuate some, if not all, of the ill effects of Stratton, as outlined above. As will be shown, a workable model for online provider liability can be developed to completely break the traditional link between content control and liability(111) and shift the focus of provider liability to post-transmission actions to deter and remedy abusive expression.

VI. A MODEL FOR ONLINE DEFAMATION LIABILITY

In order to arrive at an appropriate solution to the disharmony created by applying traditional methods of defamation law to the online medium, it is necessary to address the critical differences between traditional media and the online medium. For purposes of online provider liability, the key differences are: (1) the rapidity of transmission;(112) (2) the much larger quantity of content distributed online;(113) and (3) the lack of gatekeepers between information producers and mass distribution online.(114) While the relatively limited content of newspapers, magazines, and books is subject to editorial review before publication and distribution, online traffic is so voluminous and is distributed so quickly that much of it is not feasibly subject to editorial control.(115) Compounding the gatekeeping problem is the fact that, unlike information producers in traditional media, producers of information online are largely anonymous.(116) Author anonymity provides little incentive for care and discretion among such producers. This combination of features signifies a greater threat of reckless and unlawful use in the online medium than in other mass media forms.(117)

These distinctive characteristics of the online medium point to several basic principles which a regime for online provider defamation liability should incorporate. First, in the absence of actual knowledge, online providers must be relieved of liability for transmitting content created by others because online traffic is far too vast and instantaneous to feasibly monitor. Primary liability for online defamation should be focused on the creator of the message. Second, the comparatively greater potential for abusive and unlawful expression online demands a system that both deters such conduct and affords effective remedies for persons harmed by the expression.(118) Third, because the most valuable feature of the online medium is its capacity to promote the free and open exchange of ideas, such a model should encourage private remedies and initiatives rather than impose centralized control. The model should accommodate the decentralized(119) nature of the medium, avoid infringing on First Amendment rights and interests, and allow flexibility for technological and institutional change.(120)

This Article proposes that Congress incorporate the above principles into a regime for online defamation liability by replacing the existing Good Samaritan defense(121) in the Telecommunications Act with a rule similar to the following:

No provider of an interactive computer service shall be subject to defamation liability for transmitting information provided by another information content provider, unless the provider:

(a) has actual knowledge of the information's defamatory character prior to transmission; or

(b) after transmission, fails to exercise reasonable care after a person notifies the provider of the information's allegedly defamatory character and supports such allegation with a prima facie showing of the information's defamatory character. For purposes of this subsection, each of the following actions by the provider shall constitute presumptive evidence of "reasonable care":

(i) Removal of the information from the provider's service within a reasonable time;

(ii) If the alleged defamation occurred in a forum owned or controlled by the provider, furnishing the injured person with access to said forum for purposes of rebuttal within a reasonable time;

For purposes of this subsection, each of the following actions by the provider shall constitute non-presumptive evidence of "reasonable care":

(a) Reasonable efforts to identify the creator of the alleged defamation;

(b) Implementation of measures which promote subscriber financial responsibility for unlawful online expression.

The proposal above will be discussed within the framework of its three essential components.
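
Before examining those components, the skeleton of the proposed rule can be summarized procedurally. The following Python sketch is offered only as an illustration of the rule's decision structure; the names are hypothetical, and the rebuttable presumptions of reasonable care are flattened into a simple outcome that actual litigation would not guarantee.

    from dataclasses import dataclass

    @dataclass
    class DefamationClaim:
        # Facts bearing on the proposed rule, simplified to booleans.
        actual_knowledge_pre_transmission: bool  # subsection (a)
        prima_facie_showing: bool                # trigger for subsection (b)
        removed_within_reasonable_time: bool     # presumptive action (i)
        rebuttal_access_granted: bool            # presumptive action (ii)

    def provider_faces_liability(claim: DefamationClaim) -> bool:
        """Apply the proposed rule to content created by another."""
        # Subsection (a): pre-transmission actual knowledge.
        if claim.actual_knowledge_pre_transmission:
            return True
        # Without a prima facie showing, the post-transmission duty
        # of reasonable care is never triggered.
        if not claim.prima_facie_showing:
            return False
        # Subsection (b): either presumptive action presumptively
        # establishes reasonable care. (Non-presumptive evidence,
        # such as efforts to identify the poster, is not modeled.)
        if claim.removed_within_reasonable_time or claim.rebuttal_access_granted:
            return False
        return True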

A. Pre-Transmission: Actual Knowledge

The first essential element of an appropriate online defamation liability scheme is to protect online service providers from liability for the original transmission of another's defamatory content unless the provider had actual knowledge of the defamation prior to transmission. This rule is designed to avoid the pitfalls of the current Good Samaritan Provision, under which a provider may still be held liable as a distributor for having "reason to know" of transmitted defamatory material based on that provider's content control activities.(122) By requiring actual, rather than imputed, knowledge to impose liability, the proposed rule ensures that a provider could not be held liable for others' content simply for exercising content control over its system, regardless of whether the provider had any feasible opportunity(123) to learn of and remove the defamatory material prior to transmission.

The actual knowledge standard proposed by this Article is essentially an affirmance of the existing common law principle that distributors are liable if they know that the content distributed is defamatory.(124) As a practical matter, a plaintiff typically would be able to establish such liability for pre-transmission knowledge only where the provider: (1) was notified of the particular offending matter prior to transmission and had the power to prevent the transmission; or (2) was clearly using its service as an instrument of unlawful expression by systematically tolerating and encouraging subscribers to post defamatory content. Such situations are more common in the case of small BBSs than larger services and, because of greater profit potential, tend to occur in contexts other than defamation, such as obscenity and copyright infringement.(125)

In sum, because the reality of rapid and voluminous online transmission means that a provider cannot reasonably be expected to pre-screen for defamation, an actual knowledge rule would appropriately shift the reasonableness inquiry to the post-transmission period.

B. Post-Transmission: Reasonable Care

Basing defamation liability on a post-transmission "reasonable care" inquiry is merely an adaptation of the long-established common law rule that "[o]ne who intentionally and unreasonably fails to remove defamatory matter that he knows to be exhibited on land or chattels in his possession or under his control is subject to liability for its continued publication."(126) The rationale behind this type of "republication"(127) liability is that one has a duty of reasonable care not to permit the use of his land for a purpose damaging to others.(128)

During the post-transmission period, an aggrieved person would notify the provider of the alleged online defamation. As part of the notification, the person would be required to present a prima facie case(129) of the content's defamatory character in order to trigger the provider's duty to act with "reasonable care" within the meaning of the statutory provision.

1. Prima Facie Case

Placing a prima facie evidentiary burden on the aggrieved claimant would serve several useful purposes. First, such a requirement would deter frivolous claims, easing the provider's administrative and investigative burden of responding to them. Further, the approach recognizes that most providers do not have the resources, expertise, or contextual information at hand to properly evaluate claims of defamation without assistance from the claimant. Second, the prima facie requirement would minimize the incentive for providers to remove material whenever notified of its offending nature, regardless of the merits of the defamation claim.(130) In so doing, it serves First Amendment interests in free expression unfettered by censorship.

Requiring the aggrieved person to present prima facie evidence of defamation is also supported, to some extent, by existing defamation law. Numerous states have enacted laws prohibiting or limiting a plaintiff's defamation recovery from a media defendant unless a demand for retraction was first made and refused.(131) While none of these state laws, to this author's knowledge, explicitly require that the demand present a prima facie case of defamation, many do require that the demand give a detailed explanation of how the allegedly defamatory matter is false and injurious.(132)

Finally, the prima facie requirement is reflected in currently evolving law in the analogous context of copyright infringement. In Religious Technology Center v. Netcom On-Line Communication Services, Inc.,(133) the court considered the question of whether a BBS should be subject to contributory copyright infringement(134) liability for failure to remove infringing material posted by a subscriber after being notified of the infringement. In finding that a question of fact existed on the issue so as to preclude summary judgment, the Netcom court held that the claimant must provide "the necessary documentation" to show likely infringement before the provider can be subject to contributory infringement liability, because "it is beyond the ability of a BBS operator to quickly and fairly determine when a use is not infringement" in the absence of such documentation.(135) Considering the substantial difficulties in evaluating a defamation claim,(136) the Netcom court's reasoning applies with at least equal force in the defamation context.

2. Presumptive Evidence of Reasonable Care

The proposed rule defines "reasonable care" in a way that creates flexible incentives for online providers to take action aimed at deterring and remedying unlawful expression online. Such flexibility is necessary because of the broad range of online providers operating in cyberspace: what is reasonably expected of a large commercial online service with extensive resources may not necessarily be reasonably expected of a small, non-commercial BBS.(137) The definition is divided into two categories: presumptive and non-presumptive evidence of reasonable care. Actions creating a presumption(138) of reasonable care in favor of the provider would include removing the offending material within a reasonable time and providing rebuttal access to the claimant, within a reasonable time, in the same forum where the defamation occurred. These actions are considered to be the most important because they directly remedy the injurious expression by destroying it or exposing its falsity.(139) Either of the actions alone would invoke the presumption, giving providers maximum flexibility in responding to the claim. Neither action, however, would conclusively(140) insulate a provider from liability; the presumption remains rebuttable, preserving an incentive to take the additional non-presumptive measures described below.

a. Removal of the Offending Material

Under the first option for presumptive evidence of reasonable care, the online provider would remove the material(141) from its system within a reasonable time after the claimant notifies the provider of its existence and demonstrates its defamatory character.(142)

The removal option is derived from the common law rule discussed above, which provides that one is liable for republication of defamatory matter exhibited on land or chattels in one's possession or under one's control.(143) This republication rule is grounded in long-established principles of defamation law.(144) Under the Restatement rule, "[o]ne who intentionally and unreasonably fails to remove defamatory matter that he knows to be exhibited on land or chattels in his possession or under his control is subject to liability for its continued publication."(145) The common law has adapted this principle to a variety of contexts, most commonly in "graffiti" cases, where premises owners are required to remove, within a reasonable time, defamatory graffiti placed on the premises by patrons.(146) The Restatement rule is one aspect of traditional defamation law that adapts reasonably well to the online context.

In another sense, the removal option is merely an extension of state retraction statutes. Just as a media defendant's publishing or broadcasting of a retraction is competent evidence in favor of limiting liability under retraction laws,(147) an online provider's removal of defamatory matter is competent evidence in favor of limiting liability under the proposed Good Samaritan law. It is arguable that removal under the proposed model justifies limited liability even more forcefully than retraction in traditional media, since it accomplishes what retraction cannot: obliteration of the offending material from the defendant's product.(148)

In order to establish what constitutes a "reasonable time" after notice of transmitted defamatory material, online service providers could join together to promulgate an industry-wide standard or code of conduct. Although a "reasonable time" would vary somewhat depending on the circumstances, providers would benefit from the relative certainty afforded by an industry custom. Such certainty is bolstered by the fact that courts often defer to industry standards when ascertaining the reasonableness of a defendant's conduct.(149)

b. Rebuttal Access

The second type of presumptive evidence regarding reasonable care provides that if the defamation was posted on a forum owned or controlled by an online service provider, the provider would have the option of granting to an aggrieved claimant, who has established a prima facie case of defamation, rebuttal access to the forum on which the defamation occurred. This option can be extremely beneficial: if a provider believes that the defamation claim is of questionable merit, or otherwise does not wish to or cannot(150) remove the offending material, granting rebuttal access allows it to protect itself from liability without resorting to censorship.

Offering online rebuttal access in exchange for limited liability is consistent with Supreme Court precedent and defamation policy. In Gertz v. Robert Welch, Inc.,(151) the Supreme Court reaffirmed that a higher standard for proving defamation applies to public figure and public official plaintiffs.(152) In justifying its holding, the Gertz court stated that "the first remedy of any victim of defamation is self-help .... [p]ublic officials and public figures usually enjoy significantly greater access to the channels of effective communication and hence have a more realistic opportunity to counteract false statements than private individuals normally enjoy."(153) The value of response in remedying defamation is also reflected in retraction/rebuttal statutes, which limit defamation liability of media entities that provide a timely retraction or editorial rebuttal upon demand.(154)

Despite the high value placed on rebuttal as a remedy for defamation, space limitations in traditional media largely preclude rebuttal access by private plaintiffs as a practical matter. By contrast, the online medium is not bounded by physical space.(155) Assuming the forum operator is willing to comply, an online rebuttal can also be easily transmitted in the same forum where the defamation took place.(156) Online rebuttals can also be made more quickly(157) than rebuttals in traditional media, and at a much lower cost.(158) These intrinsic features of the online medium, combined with an incentive for forum operators to provide rebuttal access, would afford private plaintiffs an enhanced ability to rebut online defamation.(159) This enhanced means of redress also justifies a presumption against liability for cooperating providers under the reasoning of Gertz v. Robert Welch, Inc.(160)

It is also arguable that online rebuttal is uniquely effective because the harm caused by online defamation is likely to be less than in other media. The diminished harm stems from two related factors: the relative lack of gatekeepers in cyberspace and anonymity. As stated above, traditional forms of mass media have been the province of institutions, rather than individuals, and typically have the benefit of editors responsible for ensuring the quality and accuracy of content communicated. Based on this institutional gatekeeping, consumers are likely to ascribe a significant level of credibility to traditional mass media content. In cyberspace, however, a great deal of content is created and transmitted by individuals, without the involvement or backing of institutions or gatekeepers. Thus, consumers are likely to regard this type of online mass communication as inherently less credible, more akin to communication by a stranger on a street-corner than an established institutional media outlet.(161)

Compounding the diminished credibility of individually-created online mass communication is the anonymity of the statements. Anonymous remarks are inherently devalued; they are considered less credible than identified remarks because they are "costless" to the speaker and easy to make.(162) Thus, the impact of anonymous defamation on the plaintiff's reputation is generally less than that of the same remarks made by an identified, reputable individual.(163) Moreover, because the rebuttal is backed by a real person, it follows that greater credibility would be accorded the rebuttal than the anonymous defamation, resulting in a minimal adverse effect on the person's reputation. Because defamation law accounts for the effect of the offending statement on the person who hears it,(164) the discounted credibility of much online content, and the comparatively greater credibility of identified rebuttals, are significant factors weighing against defamation liability for online providers who allow rebuttal access.

In sum, the enhanced capability of the online medium to facilitate effective rebuttal of defamatory matter bespeaks the power of the medium to promote First Amendment interests,(165) which command that "error of opinion may be tolerated where reason is left free to combat it."(166) Given this potent capacity for redressing falsehoods and advancing freedom of expression, limiting the liability of online service providers that offer rebuttal access to legitimately aggrieved claimants is justified.(167)

As with the removal option, the mechanics of the rebuttal option proposed by this Article may be developed through voluntary industry customs and standards. In determining the reasonableness of a provider's furnishing of rebuttal access, courts may also draw from analogous legal precedent, particularly from retraction statutes(168) and right-to-reply statutes.(169)

3. Non-Presumptive Evidence of Reasonable Care

The model proposed by this Article designates two provider actions which do not give rise to a presumption of non-liability, but nevertheless constitute evidence of reasonable care that courts must consider: (1) making reasonable efforts to identify the creator of the alleged defamation; and (2) implementing measures which promote subscriber financial responsibility for unlawful online expression. These actions are not accorded presumptive status because they do not directly redress the defamation; rather, they are aimed at indirectly redressing the defamation by increasing the likelihood that the injured person will recover monetary compensation for the reputational harm.

The non-presumptive actions set forth are intended to shift the primary onus of liability to the creator of the defamation. This shift is based on the premise that increased communication power afforded consumers by the online medium requires increased responsibility for the content of what is communicated. From a purely legal point of view, a subscriber who posts an unprivileged defamatory falsehood on the provider's system is liable for defamation. However, placing liability on the individual sender of the defamation may have the practical effect of providing an inadequate remedy for the defamed person because subscriber postings are typically anonymous.(170) Even if an anonymous sender can be identified, the remedy to a defamed person might still be inadequate if the message sender is financially unable to remedy the harm caused by the defamation.(171) These concerns can be adequately addressed by providing incentives for the provider to shape its relationship with subscribers in such a way as to maximize subscriber accountability.

Under the first non-presumptive provider action, each provider would be required to make all reasonable efforts to identify and disclose an offender to a claimant. This induces each provider to structure its system in a way that allows ready identification of the creators of subscriber-generated content transmitted on the system. This proposition is not inconsistent with current practices, for subscriber identifiability already exists through passwords and account codes. Thus, although subscriber messages are typically anonymous to the online user, they often can be identified by the provider,(172) which can in turn inform the aggrieved claimant.(173)
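
To make the mechanics concrete, the following is a minimal sketch, with hypothetical record structures, of the internal lookup that passwords and account codes already make possible; it describes no particular provider's system.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class Account:
        account_code: str
        legal_name: str
        billing_address: str

    # Hypothetical provider-side records: screen names are anonymous
    # to other users but tied to accounts internally.
    ACCOUNTS: Dict[str, Account] = {
        "acct-1042": Account("acct-1042", "J. Doe", "123 Elm St."),
    }
    POST_LOG: Dict[str, str] = {"post-8675": "acct-1042"}  # post ID -> account code

    def identify_poster(post_id: str) -> Optional[Account]:
        """Return the account behind a posting, if records permit."""
        code = POST_LOG.get(post_id)
        return ACCOUNTS.get(code) if code else None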

Aside from the legal incentive provided by the proposed model, providers are likely to try to track down offenders because such an action would be useful in attempting to settle plaintiffs' claims,(174) and it would also act as a deterrent, assisting in the prevention of repeat offenses. Providers increasingly have the ability to monitor patterns of subscriber activity, which they do for marketing reasons as well as to track illicit conduct.(175) Online providers commonly suspend subscribers who post unlawful material.(176) Given providers' capabilities of exposing and punishing offenders, combined with the incentives for identifying unlawful subscribers, it is likely that a policy of aggressive efforts to identify and expose subscribers who post defamatory material would become customary in the industry. Such efforts would surely provide a substantial deterrent to potentially abusive subscribers in addition to assisting defamation claimants in pursuing their remedies.(177)

The second non-presumptive measure calls for providers to implement subscriber financial responsibility policies. Such policies aim to mitigate the "shallow pocket" problem(178) of focusing liability on an individual subscriber. A number of options could be available to providers under this measure,(179) including requiring subscribers to post a small liability bond as a condition of membership or adding a surcharge to subscriber fees for the purpose of procuring insurance to satisfy defamation judgments against subscribers. In either case, the bond or insurance could be made available to claimants in cases where the offending subscriber (or unauthorized person using a subscriber's account code) was not identified. This would further assist the defamation claimant in recovering for the offender's conduct, while lessening the need for a claimant to sue the provider in search of a "deep pocket."
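
For illustration only, with entirely hypothetical figures, the insurance-surcharge option might pool funds as follows:

    # Entirely hypothetical figures illustrating the surcharge option.
    subscribers = 100_000
    monthly_surcharge = 0.50  # dollars added to each subscriber's fee

    monthly_pool = subscribers * monthly_surcharge
    print(f"${monthly_pool:,.2f} per month toward insurance premiums")
    # -> $50,000.00 per month available to procure coverage for
    #    judgments where the offending subscriber cannot be identified.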

The incentives described above are designed to induce online service providers to take steps to deter online misconduct and facilitate the defamed person's recovery for the defamation. Some commentators have argued that a strict liability regime would better motivate providers to take such deterrent and remedial measures.(180) This author disagrees. As discussed above, strict liability would drive many providers out of business and prevent others from entering the market, stifling diversity and concentrating online power in the hands of the few large corporations which could afford to stay in the market. This is too high a price to pay to induce providers to take deterrent and remedial measures which can be induced in less draconian ways. Additionally, many providers are already taking various measures to reduce online misconduct because it is in their best business interest to do so.(181)

VII. CONCLUSION

This Article has attempted to demonstrate the negative consequences of applying the traditional publisher/distributor analogy of defamation law to the online medium. Online service providers do not fit neatly into either category because of their varying functional roles and content control efforts. Holding providers liable for defamation based on content control efforts ignores the reality that no amount of such efforts would be ipso facto sufficient to give a provider reason to know of defamatory material on its network. Moreover, it represents a centralized method of regulation which is at odds with the decentralized nature of the medium.

The value of the online medium lies in its singular capacity for nurturing human creativity, expression, and the robust exchange of ideas. The democratic and interactive nature of the medium permits the average consumer, for the first time in history, to transcend passivity and play an active role in the production and mass dissemination of information. While this unprecedented opportunity for the exchange of ideas is inevitably accompanied by greater potential for abusive expression, such abuses are more easily remedied online than in other media.

In lieu of centralized regulatory measures, which are adapted to old media forms, it is necessary to formulate new incentive-based models of regulation which are in harmony with the unique features of these new technologies. By shifting the onus of content liability from online service providers to content creators, while simultaneously instituting mechanisms which facilitate deterrence and remedial measures for online abuses, Congress and online providers can effectively reduce such abuses without sacrificing First Amendment interests and the health of the online industry.

(1.) As used in this Article, the term "online service provider" is interchangeable with the term "Interactive Computer Service" as defined by the Telecommunications Act of 1996. The Act defines the latter term to mean "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions." 47 U.S.C. [sections] 230(e)(2). An "Access Software Provider" is "a provider of software (including client or server software), or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content." 47 U.S.C. [sections] 230(e)(4). Under these definitions, an online service provider may be loosely defined as any entity providing access by users to information contained in a networked computer server. The three largest commercial online service providers in the United States--America Online, CompuServe and Prodigy--serve approximately 12 million subscribers, and had gross revenues of more than $1 billion in 1995. See William J. Cook, Be Wary of Internet Casting Shadows on Copyright Holders, Chi. Law., Apr. 1996, at 60, 61, 62. Other providers include Delphi with 140,000 subscribers, eWorld with 90,000, and Genie with 75,000. See BRP Report 3Q95 Census Puts Online Use at 69 Million Households, Multimedia & Videodisc Monitor, Aug. 1, 1995, at 1. Access providers are online service providers that afford access to the Internet and electronic bulletin board services; they traditionally have not exercised substantial controls over their traffic, although they are developing increased monitoring capabilities. See Religious Tech. Ctr. v. Netcom On-Line Communications Serv., Inc., 907 F. Supp. 1361, 1368 (N.D. Cal. 1995) (finding that where an Internet access provider was able to take simple measures to prevent further copyright infringement by deleting infringing content that it had notice of, the provider could be liable for contributory copyright infringement). Another type of online service provider is a bulletin board service (BBS), which is a computer system that can be remotely accessed by users and is administered by a system operator (commonly referred to as a "sysop") who controls the boards, limits access to the system, and establishes rules for participation. See Edward A. Cavazos & Gavino Morin, Cyberspace and the Law: Your Rights and Duties in the On-Line World 2-3 (1994). Although many BBSs are local, some have acquired a national membership. Commercial online service providers also provide access to BBSs, and some BBS systems (but not all) offer direct or indirect links to the Internet. See ACLU v. Reno, 929 F. Supp. 824, 833-834 (E.D. Pa. 1996).

(2.) The source of content transmitted by online service providers is varied. Many providers create their own content, contract with other information providers to post content on their systems, and provide access to the Internet. In addition, a substantial amount of content is subscriber-generated, whether through e-mail, postings on bulletin boards or discussion groups, or real-time communication in "chat" rooms. See Plaintiffs' Mem. in Supp. of Mot. for T.R.O. & Prelim. Inj., [sections] B(1), ACLU, 929 F. Supp. at 824 (No. 96-511) [hereinafter Plaintiffs' Memorandum].

(3.) 47 U.S.C.A. [subsections] 101-710 (1996).

(4.) In 1996, two specially appointed panels of United States District Courts, one in Philadelphia and one in New York, declared the Act to be unconstitutional. See ACLU, 929 F. Supp. at 883; Shea v. Reno, 930 F. Supp. 916 (S.D.N.Y. 1996). In December, 1996, the United States Supreme Court granted certiorari to review the Philadelphia decision. Edward Felsenthal, Justices to Rule on Law Barring Internet Smut, Wall St. J., Dec. 9, 1996, at B1.

(5.) In ACLU v. Reno, the court struck down 47 U.S.C.A. [subsections] 223(a)(1)(B), (a)(2), and (d)(2) only. See id.

(6.) 47 U.S.C.A. [sections] 230(c)(1) (1996).

(7.) See ACLU, 929 F. Supp. at 883.

(8.) 23 Media L. Rep. (BNA) 1794 (N.Y. Sup. Ct. 1995).

(9.) 47 U.S.C.A. [sections] 230(c)(2)(A) (1996).

(10.) See Stratton, 23 Media L. Rep. at 1798.

(11.) See H.R. Conf. Rep. No. 104-458, at 194 (1996) ("One of the specific purposes of this section is to overrule Stratton Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material.").

(12.) See Stratton, 23 Media L. Rep. at 1796 (finding that distributors such as libraries or bookstores are subject to liability for others' defamatory statements only if they know or have reason to know of the defamatory statements at issue (citing Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991); Auvil v. CBS "60 Minutes," 800 F. Supp. 928, 932 (E.D. Wash. 1992), aff'd, 67 F.3d 816 (9th Cir. 1995), cert. denied, 116 S. Ct. 1567 (1996))).

(13.) Although this Article focuses on defamation liability in particular, the conclusions reached have implications for all forms of content liability, including obscenity, copyright or trademark infringement, misappropriation and invasion of privacy.

(14.) See Phillip H. Miller, New Technology, Old Problem: Determining the First Amendment Status of Electronic Information Services, 61 Fordham L. Rev. 1147, 1162 (1993). The initial model for regulating telegraphy was the railroad, which was treated as a common carrier. When the telephone was subsequently introduced, Congress and the federal courts extended the common carrier form of regulation to that medium by analogy to the telegraph. See id. at 1163.

(15.) See Plaintiffs' Memorandum, supra note 2, [sections] (C); see also supra text accompanying note 2.

(16.) See, e.g., Stratton, 23 Media L. Rep. (BNA) at 1796 (finding that where a computer network, like CompuServe, exercises little or no editorial control over the content of publications in its computer banks, it is "in essence, an electronic, not-for-profit library," but where an online service, like Prodigy, purports to control the content of its bulletin boards, it functions more like a publisher).

(17.) See id. As demonstrated by a number of well-publicized incidents, automatic screening software has proven rather clumsy at the current level of technology. See id.; see also Amy Harmon, On-Line Service Draws Protest in Censorship Flap, L.A. Times, Dec. 2, 1995, at D1 (reporting that after its obscenity screening software purged the word "breast" from its files, America Online was barraged with protests by women who used this service to share information about breast cancer); J. David Loundy, Holding the Line, On-Line, Expands Liability, Chi. Daily L. Bull., June 8, 1995, at A6 (reporting that when Vietnamese Prodigy users tried to translate Vietnamese tonal marks into roman alphabet equivalents, the system ground to a halt because the screening software caught large numbers of the letter combination "s-e-x"); Andrew Brown, Home Computer: Prodigy User Strikes Out, Independent, Aug. 27, 1993, at 24 (reporting that Prodigy's obscenity screening prevented users from posting the name of a former Japanese Prime Minister, Noboru Takeshita).

(18.) See Stratton, 23 Media L. Rep. at 1796.

(19.) See id. As of 1993, Prodigy was posting over 60,000 subscriber messages per day and had been forced to relinquish its former policy of manually reviewing all messages prior to posting. See id.

(20.) See Plaintiffs' Memorandum, supra note 2, [sections] (B)(3); see also supra text accompanying note 2.

(21.) See id.

(22.) There are presently three primary methods of remote information retrieval on the Internet. The first method is "ftp" (or file transfer protocol), which lists the names of computer files available on remote computers and allows the user to transfer files to her individual local computer. See ACLU v. Reno, 929 F. Supp. 824, 835 (E.D. Pa. 1996). Another approach uses a program and format named "gopher" to guide a user's search through the resources available on a remote computer. See id. The third method is the World Wide Web. The Web utilizes a "hypertext" formatting language called HTML, which enables a user to jump from one source to other related sources by clicking on a link. See id. at 836. Hyperlinking allows information to be accessed and organized in flexible ways and allows people to locate and efficiently view related information even if the information is stored on numerous computers all around the world. See id.

(23.) See id. at 832 (noting that "it would not be technically feasible for a single entity to control all of the information conveyed on the Internet").

(24.) See Frederick Lim, Obscenity and Cyberspace: Community Standards in an On-Line World, 20 Colum. J.L. & Arts 291, 309 (1996). Usenet traffic flows over a wide range of networks, including the Internet and dial-up phone links. See Religious Tech. Ctr. v. Netcom On-Line Communications Serv., Inc., 907 F. Supp. 1361, 1365 (N.D. Cal. 1995).

(25.) See Lim, supra note 24, at 308-09.

(26.) An example of the difficulty of controlling content on Usenet newsgroups occurred in December, 1995, when a Munich prosecutor declared over 200 sexually explicit newsgroups violative of German pornography laws, forcing CompuServe to block access to all its subscribers, including Americans. See John Markoff, On-Line Service Blocks Access to Topics Called Pornographic, N.Y. Times, Dec. 29, 1995, at A1. CompuServe subsequently restored access to the newsgroups, choosing instead to offer free software enabling parents to control the content received over their personal computers. See Peter H. Lewis, An On-Line Service Halts Restrictions on Sex Material, N.Y. Times, Feb. 14, 1996, at A1.

(27.) 18 U.S.C. [subsections] 2510-2522 (1988). The ECPA has an exception which provides that e-mail may be intercepted or disclosed if done pursuant to a court order. See [sections] 2511(2)(a)(ii).

(28.) See 18 U.S.C.A. [sections] 2511(1)(a).

(29.) See ACLU v. Reno, 929 F. Supp. 824, 843 (E.D. Pa. 1996) (noting that many online service providers "make available content of other speakers over whom they have little or no editorial control").

(30.) Numerous commentators have acknowledged the revolutionary impact of the online medium on the communication power of average citizens. See, e.g., Lim, supra note 24, at 295 (quoting Ralph Nader's characterization of cyberspace as "the lowest-entry-level-barrier mass communication system in history" and noting that an individual user who posts a message on a Usenet newsgroup can reach a global audience of millions); Cynthia L. Counts & C. Amanda Martin, Libel in Cyberspace: A Framework for Addressing Liability and Jurisdictional Issues in This New Frontier, 59 Alb. L. Rev. 1083, 1085 (1996) (asserting that the electronic superhighway has made possible an "egalitarian marketplace of ideas"); Lance Rose, Netlaw: Your Rights in the Online World xv (1995) (arguing that online technology is "the start of a social revolution, perhaps the most important structural advance in society in our lifetime"); Harley Hahn & Rick Stout, The Internet Yellow Pages 3 (1995) (referring to the Internet as "the first global forum, and the first global library").

The decentralization of mass communication power is particularly significant in light of the current trend towards the concentration of ownership and control over the American mass media. See Lim, supra note 24, at 295. One commentator has noted that by 1990, the majority of all major American media was controlled by twenty-three corporations, and that by 2000, this number could be reduced to six. See id. (citing Ben H. Bagdikian, The Media Monopoly 4 (4th ed. 1992)). The potential of the online medium to counteract this concentration of voices is testimony to the power and importance of the medium in promoting First Amendment interests.

(31.) Plaintiffs' Memorandum, supra note 2, [sections] (B)(3). See ACLU, 929 F. Supp. at 843 (acknowledging that "because of the different forms of Internet communication, a user of the Internet may speak or listen interchangeably, blurring the distinction between 'speakers' and 'listeners' on the Internet").

(32.) See Jerry Berman & Daniel J. Weitzner, Abundance and User Control: Renewing the Democratic Heart of the First Amendment in the Age of Interactive Media, 104 Yale L.J. 1619, 1623-24 (1995) (explaining that the potential for open-access and decentralization presented by cyberspace networks overcomes the lack of diversity afforded by today's mass media, which are based on an architecture with a fixed number of available channels). See also Interactive Working Group Report to Senator Leahy, Parental Empowerment, Child Protection, & Free Speech in Interactive Media, July 24, 1995, at 4-5 [hereinafter Leahy Report] ("Unlike centralized broadcast radio and television services, there are no central control points through which either a single network operator or government censors can control particular content .... [The] proliferation of individual speakers stands in sharp contrast to broadcast television or even cable television, where one may count five, ten or perhaps one hundred speakers, each of whom controls a channel."), quoted in Plaintiffs' Memorandum, supra note 2, [sections] (B)(3) n. 45.

(33.) ACLU, 929 F. Supp. at 830-38.

(34.) 50 Am. Jur. 2d, Libel and Slander [sections] 2 (1995).

(35.) See, e.g., Gertz v. Robert Welch, Inc., 418 U.S. 323, 347 (1974) (finding that "so long as they do not impose liability without fault, the States may define for themselves the appropriate standard of liability for a publisher or broadcaster of defamatory falsehood injurious to a private individual").

(36.) Restatement (Second) of Torts [sections] 568 (1977).

(37.) See id. [sections] 577 cmt. a (1977).

(38.) See Smith v. Utley, 65 N.W. 744 (Wis. 1896) (holding a managing editor of a newspaper liable for publication of a defamatory article whether or not he actually knew of the defamation, because the matter is constructively under the editor's supervision).

(39.) The "reason to know" standard is another way of stating negligence. See Henry H. Perritt, Jr., Tort Liability, The First Amendment, and Equal Access to Electronic Networks, 5 Harv. J.L. & Tech. 65, 103 n. 195 (1992).

(40.) See Stratton Oakmont Inc. v. Prodigy Serv. Co., 23 Media L. Rep. (BNA) 1794 (N.Y. Sup. Ct. 1995); see also supra text accompanying note 9.

(41.) See Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991) (citing Lerman v. Chuckleberry Publ'g, Inc., 521 F. Supp. 228, 235 (S.D.N.Y. 1981)).

(42.) See id.

(43.) See id. at 139-40. The rationale underlying the limitations on distributor liability was aptly expressed by the United States Supreme Court in Smith v. California, 361 U.S. 147 (1959). In this case, the Court struck down, on First and Fourteenth Amendment grounds, an ordinance imposing liability on a bookseller for possession of an obscene book, regardless of whether the bookseller had knowledge of the book's contents. See id. at 153. The Court reasoned:

"[Under a strict liability standard, e]very bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience." The King v. Ewart, 25 N.Z.L.R. 709, 729 (C.A.). And the bookseller's burden would become the public's burden, for by restricting him the public's access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed.

See id.

(44.) 776 F. Supp. 135 (S.D.N.Y. 1991).

(45.) See id. at 137. As part of CompuServe's information services, subscribers may obtain access to over 150 special interest "forums," which are comprised of electronic bulletin boards, interactive online conferences, and topical databases. See id.

(46.) See id. at 140.

(47.) See id.

(48.) See id. at 140-41.

(49.) See id. at 137.

(50.) Id. at 140.

(51.) See id. at 141.

(52.) See id.

(53.) 23 Media L. Rep. (BNA) 1794 (N.Y. Sup. Ct. 1995).

(54.) See id. at 1795.

(55.) See id. at 1796.

(56.) See id.

(57.) See id. at 1798.

(58.) Id. at 1797.

(59.) See Stratton Oakmont Inc. v. Prodigy Services Co., 23 Media L. Rep. (BNA) 1794, 1797 (N.Y. Sup. Ct. 1995).

(60.) Id.

(61.) Id. at 1798.

(62.) See supra notes 36-39 and accompanying text for a discussion of the distinction between publisher and distributor liability in the defamation context.

(63.) See Stratton, 23 Media L. Rep. (BNA) at 1797. If the Court had found that Prodigy was a distributor, rather than a publisher, it would have been "considered a passive conduit" that would not be "found liable in the absence of fault." Id. at 1796.

(64.) See id. at 1797. The Stratton court held that Prodigy was a publisher rather than a distributor because it was "clearly making decisions as to content." Id. Throughout the Stratton decision, the court relies expressly on the fact that Prodigy attempted to exercise control over content, rather than on the nature or degree of that control. See id.

(65.) If anything, the Stratton court cast doubt on the sufficiency of Prodigy's control efforts to trigger publisher status by recognizing that Prodigy's editorial control over its bulletin boards "is not complete." Id.

(66.) The briefs and the published opinion in Stratton suggest that the Court never considered the impact of the nature of content control on the publication question. See John B. Kennedy & Shoshana R. Davids, A Recent Decision Holding an Online Service Provider Liable for Defamation Could Have Far-Reaching Effects for Operators Who Want to Maintain Content Control, Nat'l L.J., July 10, 1995, at B7 ("There is no indication, for example, that the court examined the distinction between types of editing, such as employing screening software for specific vulgarities to control the tone of a service as opposed to comprehensive editing for libelous statements in the manner of a print publisher.").

In failing to consider this question, the Stratton court appears to have departed from precedent. In a 1984 New York State decision, a state supreme court held that a contract printer of newspapers who scrutinized the material he printed for nudity, profanity, and vulgarity--but not for defamation--did not thereby become a publisher for libel purposes. See Misut v. Mooney, 475 N.Y.S.2d 233 (Sup. Ct. 1984). The Misut court recognized that editing for obscenity and profanity is significantly different from editing that includes "an obligation to confirm facts, check sources and to thereby be responsible for the truth of printed statements." Id. at 236.

(67.) Stratton, 23 Media L. Rep. (BNA) at 1797 (citing Auvil v. CBS "60 Minutes," 800 F. Supp. 928, 931-32 (E.D. Wash. 1992)).

(68.) Auvil, 800 F. Supp. at 930-31.

(69.) See id. at 931.

(70.) See id.

(71.) Id., quoted in Stratton, 23 Media L. Rep. (BNA) at 1797.

(72.) Stratton, 23 Media L. Rep. (BNA) at 1797-98.

(73.) Id. at 1796.

(74.) The evidence before the Stratton court showed that Prodigy's Board Leaders and software screening program were used to censor offensive language, solicitation, bad advice, insulting or bad taste remarks, and off-topic material. See id.; see also Elizabeth Corcoran, $200 Million Libel Suit Against Prodigy Dropped: On-Line Industry Had Worried About Case, Wash. Post, Oct. 25, 1995, at F2 (reporting that although online services like Prodigy "want to act responsibly, there are clear limitations in [their] ability to know what's on [their] networks").

(75.) See Jessica R. Friedman et al., A Lawyer's Ramble Down the Information Superhighway, 64 Fordham L. Rev. 697, 799 n.581 (1995) (explaining that after Stratton, "[i]t appears that courts will classify online service providers as distributors only if they do not take any such steps" to censor material transmitted on their networks).

(76.) The text of the Stratton opinion suggests that the court may have borrowed from common carrier principles in finding Prodigy subject to strict liability for exercising content control. Specifically, the court stated that Prodigy's "conscious choice to gain the benefits of editorial control has opened it up to a greater liability than CompuServe and other computer networks that make no such choice." Stratton, 23 Media L. Rep. (BNA) at 1798. This statement implicitly reflects the type of tradeoff principles found in the regulation of common carriers, whereby a regulated entity is required to forego all control over transmitted content and provide unfettered access to all, in exchange for a privilege against liability for the transmitted content. See Perritt, supra note 39, at 73-75.

If the Stratton court did, indeed, borrow from common carrier principles in rendering its decision, its reliance was improper. The common carrier model is inappropriate for online service providers because it was developed to address the problem of natural monopolies, such as telephone companies and railroads exploiting their position by arbitrarily denying access to their service and, in so doing, thwarting competition. See id. When a monopoly controls content, there is a distinct danger that competition and the free exchange of ideas will ultimately suffer. The opposite is so in the case of online service providers, which are far from monopolies and whose content control allows them to develop and differentiate their services, thereby promoting competition.

(77.) At common law, defamation was a tort of strict liability. Cognizant of the chilling effect of strict liability on First Amendment rights of free expression, the Supreme Court in a series of opinions altered the common law and held that states may not impose defamation liability in the absence of fault. See Gertz v. Robert Welch, Inc., 418 U.S. 323, 347 (1974); New York Times Co. v. Sullivan, 376 U.S. 254, 254 (1964).

(78.) See Kennedy & Davids, supra note 66, at B9.

(79.) See David P. Miranda, Defamation in Cyberspace: Stratton Oakmont, Inc. v. Prodigy Services Co., 5 Alb. L.J. Sci. & Tech. 229, 235 (1996) (noting that the threat of continuous liability imposed upon online service providers by Stratton would "forc[e]" providers "to abandon all protective measures," and transform all discussion groups into "an unmoderated free-for-all").

(80.) See id.

(81.) See Richard P. Hermann, II, Who is Liable for On-Line Libel?, 8 St. Thomas L. Rev. 423, 441 (1996) (recognizing that the Stratton ruling could require on-line providers to abandon control of their systems or "investigate and evaluate the truth and accuracy of every message posted on their electronic bulletin boards prior to posting").

(82.) See David J. Conner, Cubby v. CompuServe, Defamation Law on the Electronic Frontier, 2 Geo. Mason Indep. L. Rev. 227, 241 (1993) (noting that computer bulletin board system operators would sustain an "excessive burden" if held "liable as a general rule when they screen the content of their publications").

(83.) See David Loundy, E-Law: Legal Issues Affecting Computer Information Systems and System Operator Liability, 12 Computer L.J. 101, 148 (1993) (recognizing that if a "know or reason to know" standard were applied to computer information systems, "[l]arger commercial services would have to either increase costs to the users or decide that providing some services are no longer worth the expense").

(84.) See Lim, supra note 24, at 310.

(85.) See Loundy, supra note 83, at 148.

(86.) See id.

(87.) See Conner, supra note 82, at 237.

(88.) See Lim, supra note 24, at 310; Loundy, supra note 83, at 148.

(89.) See ACLU v. Reno, 929 F. Supp. 824, 883 (E.D. Pa. 1996).

(90.) See Miranda, supra note 79, at 235.

(91.) See Jessica R. Friedman, Libel in Cyberspace, Folio, Sept. 1, 1995, at 57, 61 (forecasting that online service providers abdicating all editorial control would likely be "blindsided" by consumer dissatisfaction with the nature or quality of material appearing on their networks).

(92.) Ironically, the Stratton court acknowledged the value of content control by online providers in stating that "[p]resumably, [Prodigy's] decision to regulate the content of its bulletin boards was in part influenced by its desire to attract a market it perceived to exist consisting of users seeking a 'family oriented' computer service." Stratton Oakmont, Inc. v. Prodigy Services Co., 23 Media L. Rep. (BNA) 1794, 1798 (N.Y. Sup. Ct. 1995). The court failed to explain, however, why Prodigy's desire to reach a certain market justifies the imposition of strict defamation liability. If Prodigy's regulation of content to serve a "family-oriented" market justifies strict liability, then shouldn't a newsstand's selection of "family-oriented" periodicals to sell and refusal to carry pornographic material subject it to strict liability? In such a case, hasn't the newsstand "uniquely arrogated to itself the role of determining what is proper for its [customers] ... to read," just as Prodigy has done? See id. at 1797.

(93.) The purpose of the First Amendment is to assure the "unfettered interchange of ideas for the bringing about of political and social changes desired by the people." New York Times Co. v. Sullivan, 376 U.S. 254, 269 (1964). Courts have long recognized the dangerous chilling effect on freedom of speech associated with strict liability standards applied to communications media. See, e.g., Smith v. California, 361 U.S. 147, 152 (1959) (holding unconstitutional an ordinance imposing strict liability on a bookseller possessing obscene material, because imposing liability on booksellers "even though they had not the slightest notice of the character of the books they sold," would substantially chill First Amendment rights); O'Brien v. Western Union Tel. Co., 113 F.2d 539, 542 (1st Cir. 1940) (recognizing that the effect of strict liability for defamatory messages on telegraph companies "could only result in delayed transmission of, and in some cases refusal to transmit, messages which the courts after protracted litigation might ultimately determine to have been properly offered for transmission").

(94.) The idea that the most valuable features of the online medium would be abrogated if online providers were held liable for others' content was expressed in the obscenity context by the court in ACLU v. Reno, 929 F. Supp. 824 (E.D. Pa. 1996). In characterizing the effects of the indecency provisions of the Telecommunications Act, the Reno court stated that:

As some speakers leave or refuse to enter the medium, and others bowdlerize their speech or erect the barriers that the Act envisions, and still others remove bulletin boards, Web sites, and newsgroups, adults will face a shrinking ability to participate in the medium. Since much of the communication on the Internet is participatory, i.e., is a form of dialogue, a decrease in the number of speakers, speech fora, and permissible topics will diminish the worldwide dialogue that is the strength and signal achievement of the medium.

Id. at 879.

(95.) After settling their dispute with an apology from Prodigy, the parties to the Stratton case sought to negate the precedential effect of the decision, which was widely feared by the online community. See Corcoran, supra note 74, at F2. Although Stratton Oakmont agreed not to oppose Prodigy's motion to reargue the motion for summary judgment, and a court reversal was expected, the Stratton court unexpectedly entered an order on December 11, 1995, denying Prodigy's motion and refusing to vacate its earlier decision. See Stratton Oakmont, Inc. v. Prodigy Servs. Co., 24 Media L. Rep. (BNA) 1126, 1128 (N.Y. Sup. Ct. 1995); see also Corcoran, supra note 74, at F2. The court found that it would not "be advisable to allow private parties to demand that the Court eradicate precedent which they personally find unacceptable on threat of burdensome litigation should the Court refuse." Id. at 1127 (citing Paramount Communications v. Gibraltar Casualty Co., 623 N.Y.S.2d 850 (N.Y. App. Div. 1995)). Thus, the Stratton decision's precedential effect survived the parties' settlement.

(96.) See 47 U.S.C.A. [subsections] 101-710.

(97.) Title V of the Act includes the provisions of the Communications Decency Act of 1996. Under these provisions, "[w]hoever--(1) in interstate or foreign communications--(A) by means of a telecommunications device knowingly--(i) makes ... and (ii) initiates the transmission of ... any ... communication which is obscene ... or indecent ... (B) ... knowing that the recipient of the communication is under 18 years of age" shall be fined or imprisoned, or both. 47 U.S.C. [sections] 223(a)(1)-(2).

(98.) 47 U.S.C. [sections] 223(e)(5)(A).

(99.) ACLU, 929 F. Supp. at 824.

(100.) See id. at 849.

(101.) See id. at 883.

(102.) 47 U.S.C. [sections] 230(c)(1). The Good Samaritan Provision contains a second part which is less important here. It provides, in relevant part, that no provider shall be liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." 47 U.S.C. [sections] 230(c)(2). The language used in this section is ambiguous, particularly in the use of the term "otherwise objectionable." Because of this ambiguity, it is unclear what type of screening will invoke the defense. However, the overall tenor of the passage suggests that its scope is limited to obscene or indecent material. Thus, the second part of the Good Samaritan provision appears to protect online providers from liability premised on content control only when that content control is specifically aimed at screening obscene or indecent matter. See A Briefing on Public Policy Issues Affecting Civil Liberties Online, CDT Policy Post, August 4, 1995 <http://www.cdt.org/publications/pp230804.html> (stating that the Good Samaritan Provision would overturn Stratton, "because the service screens for sexually explicit material and language"). Since many online providers exercise content control over their systems in capacities other than screening for sexually explicit material, e.g., screening for off-topic material, this part of the Good Samaritan defense is of little use to providers seeking to avoid liability for content control.

(103.) See Peter Brown & Richard Raysman, Liability of Internet Access Provider Under Decency Act, N.Y.L.J., Mar. 12, 1996, at 3.

(104.) H.R. Conf. Rep. No. 104-458, at 194 (1996), reprinted in 1996 U.S.C.C.A.N. 10, 207-08.

(105.) See supra text accompanying notes 12 and 40-43 for a discussion of distributor liability.

(106.) See supra note 12 and accompanying text.

(107.) 776 F. Supp. 135, 140 (S.D.N.Y. 1991).

(108.) See id. at 141.

(109.) Courts also may erroneously borrow from Stratton's common carrier analogy to justify subjecting providers to liability for exercising any content control. See supra note 76 and accompanying text.

(110.) See H.R. Rep. No. 104-458, at 194.

(111.) Liability here refers to liability based on "reason to know." The author does not purport to free an online provider from liability for defamatory material that the provider has actual knowledge of via content control efforts. See infra notes 122-124 and accompanying text.

(112.) See supra note 19 and accompanying text.

(113.) See id.

(114.) See Thomas Brooks, Catching Jellyfish in the Internet: The Public-Figure Doctrine and Defamation on Computer Bulletin Boards, 21 Rutgers Computer & Tech. L.J. 461, 482 (1995).

(115.) As stated earlier, the relative absence of content filtering is most pronounced in the case of "mom and pop" bulletin boards, which, because of their smaller staffs and fewer resources, are particularly vulnerable in a strict liability regime. See Lim, supra note 24, at 312.

(116.) See infra note 120.

(117.) See Robert Charles, Computer Bulletin Boards and Defamation: Who Should be Liable? Under What Standard?, 2 J.L. & Tech. 121, 145 (1987) (noting that anonymity and instantaneous transmission online create "a potentially greater risk to the enjoyment and maintenance of a good reputation than is posed by other media"). While the distinctive characteristics of the online medium facilitate more unlawful expression than in other media, the detrimental impact of such greater unlawful expression, at least in the defamation context, may not be as pronounced as in traditional media. See infra notes 120, 161-164 and accompanying text.

(118.) Emphasizing remedies for online abusive expression and deterrence is important because it addresses both of the twin aims of tort law that underlie defamation liability: "compensating injury to the particular interest, and discouraging conduct that risks injury to the interest." Perritt, supra note 39, at 95 n. 143.

(119.) See Berman & Weitzner, supra note 32, at 1624 (praising decentralized networks for allowing widespread access unfettered by censors and screeners).

(120.) A model for online liability can be structured in two general ways: a centralized "top down" approach implementing rules through legislative enactment or judicial decision, or a "bottom up" approach emphasizing self-help, contracts, private association rules, and the development of customs. See I. Trotter Hardy, The Proper Legal Regime for "Cyberspace," 55 U. Pitt. L. Rev. 993, 1054 (1994). Hardy favors the latter approach, because it is more in sync with the decentralized nature of cyberspace, allowing those actively involved in the medium (and who have the most knowledge about the medium) to adapt their behavior as the medium evolves. See id. (noting that "the most flexible, least intrusive rule-making process is best because communications technology is changing so rapidly"). The model presented by this Article is a blending of the two approaches, with a nonmandatory legislative component offering incentives for online providers to avoid liability through the development of creative measures to deter and remedy unlawful online expression.

(121.) 47 U.S.C. [sections] 230(c) (1996).

(122.) See supra note 12 and accompanying text.

(123.) The infeasibility of pre-screening transmissions is especially pronounced in the defamation context. Unlike vulgar language, defamation is a subtle and complex legal construct which is not readily ascertainable. See Conner, supra note 82, at 242 (asserting that defamation is very difficult to define legally and is "inherently more difficult to define with certainty" than obscenity); Auvil v. CBS "60 Minutes," 800 F. Supp. 928, 931-32 (E.D. Wash. 1992) (finding that imposing a duty to censor defamatory material would be unrealistic because it "would force the creation of full time editorial boards at local stations throughout the country which possess sufficient knowledge, legal acumen and access to experts to continually monitor incoming transmissions and exercise on-the-spot discretionary calls or face $75 million lawsuits at every turn"). To further complicate the process of screening for defamation, one form of such liability, defamation per quod, may be imposed where the defamation is not even apparent from the words themselves but is based on facts extrinsic to the matter published. See Brooks, supra note 114, at 471 n.85 (finding that a newspaper report that the plaintiff had given birth established libel per quod if the plaintiff could prove that she had been married only one month before the reported birth (citing Morrison v. Ritchie & Co., 4 F. 645, 649 (1902))).

(124.) See Lerman v. Flynt Distrib. Co., 745 F.2d 123, 139 (2d Cir. 1984) (holding that "[w]hen a distributor acts with the requisite scienter in distributing materials defaming or invading the privacy of a private figure, it must be subject to liability") (quoting Lewis v. Time, Inc., 83 F.R.D. 455, 464 (E.D. Cal. 1979)).

(125.) See, e.g., Sega Enter. v. Maphia, 857 F. Supp. 679, 686-87 (N.D. Cal. 1994) (holding BBS liable in copyright infringement case where it knowingly copied, directed and encouraged unlawful posting of copyrighted material).

(126.) Restatement (Second) of Torts [sections] 577(2) (1977).

(127.) Each communication of a defamatory statement to a third person constitutes a new publication which gives rise to a cause of action. Thus, "one who repeats or otherwise republishes defamatory matter is subject to liability as if he had originally published it." Cianci v. New Times Publ'g Co., 639 F.2d 54, 61 (2d Cir. 1980) (quoting Restatement (Second) of Torts [sections] 578 (1977)).

(128.) Restatement (Second) of Torts [sections] 577 cmt. p (1977). The duty is explained as follows:

[T]he duty arises only when the defendant knows that the defamatory matter is being exhibited on his land or chattels, and he is under no duty to police them or to make inquiry as to whether such a use is being made. He is required only to exercise reasonable care to abate the defamation, and he need not take steps that are unreasonable if the burden of the measures outweighs the harm to the plaintiff. In extreme cases ... the defendant may not be required to take any action at all. But when, by measures not unduly difficult or onerous, he may easily remove the defamation, he may be found liable if he intentionally fails to remove it.

Id.

(129.) To establish a prima facie case of defamation, the aggrieved person would be required to present facts showing that the content is false, unprivileged, and injurious to her reputation, and that the allegedly defamatory matter was transmitted by the provider. See Restatement (Second) of Torts [sections] 577 (1977).

(130.) See Perritt, supra note 39, at 132-33 (noting that a disadvantage of private enforcement by online service providers is that "[i]f providers of network services face potential liability for the content of traffic carried on their networks, they will be quick to cut off anyone whose activities might give rise to liability"). It follows that if provider liability could not attach unless the claimant first set forth facts establishing her claim, providers would not feel compelled to remove material every time a claim is made to avoid potential liability.

(131.) For an annotated list of states with retraction statutes, see W.E. Shipley, Annotation, Validity, Construction, and Application of Statute Limiting Damages Recoverable for Defamation, 13 A.L.R.2d 277, 287, [sections] 5 (1950). The duty to request a retraction is also found in state common law. See Donna M. Murasky, Avoidable Consequences in Defamation: The Common-Law Duty to Request a Retraction, 40 Rutgers L. Rev. 167 (1987).

(132.) See, e.g., Driscoll v. Block, 210 N.E.2d 899, 910 (Ohio App. 1965) (holding that a plaintiff's retraction demand to a newspaper was insufficient under the statutory requirement for failure to set forth under oath the truth pertaining to the statement).

(133.) 907 F. Supp. 1361 (N.D. Cal. 1995).

(134.) Contributory copyright infringement is established where the defendant, "with knowledge of the infringing activity, induces, causes or materially contributes to the infringing conduct of another." Id. at 1373 (quoting Gershwin Publ'g Corp. v. Columbia Artists Mgmt., Inc., 443 F.2d 1159, 1162 (2d Cir. 1971)).

(135.) See id. at 1374. In a more general sense, the Netcom case supports this Article's view that content liability for online service providers should focus on post-transmission actions. In a departure from existing copyright law, which holds defendants strictly liable for direct infringement, the Netcom court held that the defendant BBS could not be held liable for direct infringement without evidence of the BBS's volition. See id. at 1372. The court did "not find workable" a theory of infringement that holds mere conduits liable for activities that cannot reasonably be deterred: "Billions of bits of data flow through the Internet and are necessarily stored on servers throughout the network and it is thus practically impossible to screen out infringing bits from noninfringing bits." Id. The Netcom court found it more appropriate to analyze provider liability under the rubric of contributory infringement, not direct infringement. Contributory infringement recognizes the realities of the medium by focusing attention on the BBS-subscriber relationship and the way that imposing liability on BBS operators may shape this relationship in order to deter the real culprit--the subscriber posting the infringing material. See id. at 1369 (citing Niva Elkin-Koren, Copyright Law and Social Dialogue on the Information Superhighway: The Case Against Copyright Liability of Bulletin Board Operators, 13 Cardozo Arts & Ent. L.J. 346, 363 (1995)).

The issue of strict copyright infringement liability for online service providers has been hotly debated. Following the September 1995 issuance by the Clinton administration of the final Report of the Working Group on Intellectual Property Rights, Congress introduced a bill entitled the National Information Infrastructure (NII) Copyright Protection Act of 1995, S. 1284, 104th Cong., 1st Sess. (1995); H.R. 2241, 104th Cong., 1st Sess. (1995). See Bruce A. Lehman, Information Infrastructure Task Force, Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights (1995) [hereinafter White Paper]. Among other White Paper recommendations followed by the proposed legislation is the imposition of strict liability for direct copyright infringement on online service providers. See Jonathan Band, Online Service Provider Liability, 20 Int'l Com. Litig. 35 (1996). In part because of the outpouring of protest by the online industry concerning the strict liability provisions, the House Courts and Intellectual Property Subcommittee markup of the bill was indefinitely postponed in June 1996, effectively ending the bill's progress for the year. See NII Copyright Bill Likely Dead for the Year, Wash. Telecom News, June 17, 1996.

(136.) See supra note 76.

(137.) Maintaining a flexible reasonableness analysis to account for varying burdens on online providers is consistent with the Restatement rule for defamatory matter displayed on one's property. See Restatement (Second) of Torts [sections] 577 cmt. p (1977). Under the Restatement rule, one who has learned of defamatory matter displayed on his property need only take measures which are "not unduly difficult or onerous" under the circumstances. See id.

(138.) A "presumption" is a rule of law that attaches definite probative value to specific facts. 29 Am. Jur. 2d Evidence [sections] 160 (1994). The presumptions suggested by this Article are presumptions of law, as opposed to fact, because they represent conclusions or inferences (of reasonableness in the legal sense) to be drawn from given facts (removal of offending material or provision of rebuttal access). The suggested presumptions are rebuttable, i.e., they have the force of proof until overcome by contradictory evidence. The degree of proof required to rebut a presumption varies. Some courts require a mere preponderance of evidence. See, e.g., Strickland v. Strickland, 39 S.E.2d 483 (Ga. 1946). Other courts require "clear, distinct, positive, and satisfactory proof." See, e.g., Carter v. Graves, 56 S.E.2d 917 (Ga. 1949). So as to foster certainty and predictability for online providers wishing to comply with the Good Samaritan Provision, Congress should prescribe the standard for overcoming the presumptions to be one of clear and convincing evidence. Whatever standard is used, the question of whether a presumption has been rebutted is for the jury. See Gibson v. Gibson, 187 S.E. 155 (Ga. App. 1936).

These suggested statutory presumptions should withstand constitutional scrutiny. A statutory presumption is valid if the inference from the fact proven is not purely arbitrary, unreasonable or unnatural, and the evidentiary fact has some fair relation or natural connection with the fact to be proved and some tendency to prove it. See Tot v. United States, 319 U.S. 463, 467-68 (1943). Because removal of offending material destroys it, and rebuttal exposes its falsity, the constitutional standard should easily be satisfied.

(139.) Courts generally have considered reply to be a better remedy for defamation than monetary damages, because it more directly repairs the damaged reputation, avoids costly litigation, and promotes the type of vigorous debate that is at the heart of the First Amendment. See, e.g., Gertz v. Robert Welch, Inc., 418 U.S. 323, 344 (1974) (stating that rebuttal is the "first remedy" of any victim of defamation); Reuber v. Food Chem. News, Inc., 925 F.2d 703, 708-09 (4th Cir. 1991) (reversing defamation judgment for plaintiff who had rebuttal access but made no attempt at rebuttal because "rebuttal of offending speech is preferable to recourse to the courts"). In traditional media, however, only public figures and officials have had sufficient access to make rebuttal an effective remedy.

(140.) In its statutory definition of "presumptive evidence," Congress could specify that the presumptions created are rebuttable, and that the standard of proof to overcome the presumptions is one of "clear and convincing" evidence. See supra note 138.

(141.) Under current law, online providers may be precluded from removing certain types of online material. For instance, Chapter 119 of the Electronic Communications Privacy Act (ECPA) prohibits the interception or disclosure of private electronic communications such as e-mail. 18 U.S.C. [sections] 2511(1) (1994). The ECPA contains several exceptions, which are not completely adequate to protect providers from the dilemma of trying to carry out a duty to remove defamatory material which cannot legally be removed under the ECPA. One of these exceptions holds that interceptions may be authorized if done pursuant to a court order. See id. [sections] 2511(2)(a)(ii). To bring the ECPA into harmony with the amendments to the Telecommunications Act proposed by this Article, Congress should broaden this exception to include interceptions made in response to a legitimate claim of defamation after a prima facie showing of defamatory character.

(142.) An incentive is preferable to mandating the removal of allegedly defamatory material because it is more flexible and does not involve state action. Without state action, the provider's removal of allegedly defamatory material does not constitute a prior restraint in violation of the First Amendment. See Carlin Communications, Inc. v. Mountain States Tel. & Tel. Co., 827 F.2d 1291, 1297 (9th Cir. 1987) (holding that decisions of a communication service provider are not state action unless the provider acts pursuant to an affirmative government mandate and finding that Mountain Bell's contractual prohibitions against carrying dial-a-porn service did not constitute state action); see also CBS v. Democratic Nat'l Comm., 412 U.S. 94, 114-21 (1973) (finding that broadcaster's refusal of political advertisements under FCC rule permitting such refusal was not state action for First Amendment purposes).

(143.) See Restatement (Second) of Torts [sections] 577(2) (1977); see supra note 128.

(144.) See, e.g., infra note 158 and accompanying text.

(145.) Restatement (Second) of Torts [sections] 577(2) (1977).

(146.) See, e.g., Hellar v. Bianco, 244 P.2d 757, 759 (Cal. Dist. Ct. App. 1952) (holding that once the proprietor or controller of a premises has notice that defamatory matter is present in his facility, failure to remove the defamation within a reasonable period of time constitutes "republication" for which the proprietor or controller can be held liable).

(147.) See, e.g., Ga. Code Ann. [sections] 51-5-11(a) (providing that in any civil action for libel against a media defendant in Georgia, the defendant's publishing of a retraction or editorial rebuttal "shall be relevant and competent evidence" entitling the defendant to limited liability).

(148.) Of course, the removal of defamatory material from an online provider's network does not totally obliterate the material, since some users may have downloaded the material prior to the removal. Moreover, users who saw the offending material prior to removal may not notice its removal or understand the significance thereof. In this respect a retraction is superior to removal since it is more likely to alert users to the falsity of materials they have already seen.

(149.) See, e.g., Gilbert v. CSX Transp., 397 S.E.2d 447, 449 (Ga. App. 1990) (finding that evidence of standard industry custom for overhead-loaded trucks to be equipped with cab shields was admissible on the issue of the trucking company's negligence for failure to do so).

(150.) As discussed above, numerous technological and legal factors limit the ability of providers to remove offending material from their systems. See supra notes 19-29 and accompanying text. Moreover, because messages are automatically and periodically purged from online networks to make room for new material, the removal option would not be available in a situation where the defamatory material had already been purged when the provider was notified. See ACLU v. Reno, 929 F. Supp. 824, 835 (E.D. Pa. 1996). However, in such a case, providing rebuttal access would remain an option for a provider to fulfill its duty of reasonable care.

(151.) 418 U.S. 323, 344 (1974).

(152.) See id. (citing New York Times v. Sullivan, 376 U.S. 254, 328 (1964)). The Gertz court approved the rule that public officials and public figures can recover for defamation only upon a showing that the falsehood was published with "actual malice" (as opposed to negligence), i.e., with knowledge that it was false or with reckless disregard of whether it was false. See Gertz, 418 U.S. at 328.

(153.) Gertz, 418 U.S. at 344. The Gertz court refused to extend the "actual malice" standard to private plaintiffs in cases involving matters of public interest, in part because private plaintiffs do not normally have the access to media outlets for rebuttal that public figures and officials enjoy. See id.

(154.) An example of such state laws is Georgia's retraction statute, which provides that any print or broadcast media entity that publishes or broadcasts a retraction and/or editorial rebuttal within seven days after receiving demand (three days for broadcast entities), or in the next regular issue following demand, is not subject to punitive or special damages with respect to the published defamation. Ga. Code Ann. [subsections] 51-5-11, 5-12 (1982 & Supp. 1995). Moreover, the defendant may plead the retraction or rebuttal in mitigation of whatever actual damages are assessed. See id.; see also Cal. Civ. Code [sections] 48a(1) (West 1982) (under California law, a plaintiff may not recover general damages from a publisher or broadcaster unless a correction or retraction is demanded within 20 days after learning of the defamation, and such demand is refused).

(155.) See Counts & Martin, supra note 30, at 1087 (observing that "[u]nlike the printed forms of communication, in which space constraints limit the `news hole,' no external forces in cyberspace limit volume").

(156.) The key justification for limiting provider liability in exchange for rebuttal access is the requirement that the defamed person be allowed to reply in the same forum in which the defamation occurred. In Reuber v. Food Chem. News, Inc., the Fourth Circuit Court of Appeals found that the most significant factor warranting the plaintiff's treatment as a public figure was that he had access to the "fora where [his] reputation was presumably tarnished and where it could be redeemed." 925 F.2d 703, 708 (4th Cir. 1991). Without the cooperation of online providers, persons defamed online generally would not have sufficient access to justify limiting provider liability. This is because the person would not necessarily be a subscriber to the forum in which the defamation occurred and would have to incur the delay and expense of subscribing to such forum in order to reply to the defamation. See Brooks, supra note 114, at 480.

(157.) Online rebuttal is quicker than in traditional media because online fora such as computer bulletin board services (BBSs) do away with the intermediary of an editor. See Brooks, supra note 114, at 482. The defamed person can post her rebuttal without a third party's review, whereas one replying to defamation in a printed publication would first have to confront gatekeepers such as editors. See id. Moreover, rebuttal online need not wait for printing and distribution; it is accomplished nearly instantly with a few keystrokes. See id. at 482-83.

(158.) An online provider's greater capacity for accommodating rebuttal than traditional media forms has constitutional dimensions. In Miami Herald Publ'g Co. v. Tornillo, 418 U.S. 241 (1974), the Supreme Court struck down a right-of-reply statute granting political candidates newspaper access to answer editorial criticism. The Tornillo court held that the statute violated the First Amendment for exacting a penalty on the basis of newspaper content. See id. at 241. The "penalty" perceived by the Court was the space taken up by mandated rebuttal access which, because of column space limitations, precluded other material the newspaper may have preferred to print. See id. at 256-57. The Court noted that although newspapers are not subject to the finite technological time limitations that confront broadcasters, "it is not correct to say that, as an economic reality, a newspaper can proceed to infinite expansion of its column space to accommodate the replies that a government agency determines or a statute commands the readers should have available." See id. at 257.

Since online providers are not subject to the physical space limitations of newspapers or other traditional media, rebuttal access would not impose a penalty by preventing the provider from printing other material, and an online defamation right-to-reply law might therefore survive Tornillo. However, the Tornillo court alternatively held that the right-to-reply law was unconstitutional for intruding into newspapers' editorial control and judgment, and this reasoning presumably would apply to online providers as well. See id. at 244. Thus, to avoid any potential infringement of providers' First Amendment rights, and also in keeping with the "bottom up" focus of the model proposed by this Article, online rebuttal access should be structured as an optional incentive for providers to limit their liability, rather than an affirmative statutory mandate.

(159.) See ACLU, 929 F. Supp. at 877 (noting that "the Internet provides significant access to all who wish to speak in the medium, and even creates a relative parity among speakers").

(160.) 418 U.S. 323, 344 (1974); see also supra notes 153, 154 and accompanying text. In applying the reasoning of Gertz, one might ask why this Article advocates a presumption against liability for providers who furnish rebuttal access, rather than applying the "actual malice" standard of Gertz to such providers. The reason is, as a practical matter, that there is likely to be little difference between applying a presumption against liability and an actual malice standard where the provider has furnished rebuttal access. Many courts, for example, have found that a publisher's willingness to issue a retraction shows that the publisher did not act with actual malice in publishing the original statement. See, e.g., Bryant v. Associated Press, 595 F. Supp. 814, 818 (D.V.I. 1984) (granting summary judgment to the defendant newspaper because "upon being informed of the error, the paper took immediate steps to remedy same with a correction"); Powell v. Toledo Blade Co., No. 91-1550, 1991 WL 321960, at *3, 7 (Ohio Ct. C.P. Sept. 18, 1991) (holding that a newspaper's voluntary correction and apology after publishing a photograph of the wrong person was evidence that the newspaper was not guilty of malice); Cape Publications, Inc. v. Teri's Health Studio, Inc., 385 So. 2d 188, 190 (Fla. Dist. Ct. App. 1980) (stating that if a "newspaper prints a full and fair retraction . . . the defamed person must prove malice, bad faith or a reckless disregard for the truth or falsity of the story"). Thus, under either the actual malice standard or the presumption against liability standard, the fact that a provider furnishes rebuttal access to legitimate claimants weighs heavily against a finding of liability. For a suggestion that ready rebuttal access may turn every cyberspace plaintiff into a public figure for libel law purposes, see Mike Godwin, Libel, Public Figures, and the Net, Internet World, June 1994, at 62, 64.

(161.) The effect of institutional backing on credibility and reputational harm has been acknowledged in defamation cases. For example, in Immuno AG v. Moor-Jankowski, 567 N.E.2d 1270 (N.Y. 1991), the court dismissed a libel claim arising out of a letter to the editor of a scientific journal, finding that such letters, "unlike ordinary reporting, are not published on the authority of the newspaper or journal." The court recognized that without authoritative backing, statements are perceived as less credible, and therefore engender less reputational harm. See id.

(162.) See Hardy, supra note 120, at 1049.

(163.) See id.

(164.) In many defamation cases, evidence of the effect of the allegedly defamatory matter upon the minds of individual hearers has been held admissible on the question of damages. See Mattox v. News Syndicate Co., 176 F.2d 897 (2d Cir. 1949); Foster-Milburn Co. v. Chinn, 120 S.W. 364 (Ky. 1909); Van Lonkhuyzen v. Daily News Co., 161 N.W. 979 (Mich. 1917).

(165.) The rebuttal access option is a particularly effective tool for advancing First Amendment interests because it promotes free expression in two ways: (1) it diminishes the "overkill" tendency of liability-averse providers to remove any matter complained of regardless of the merits of the complaint by providing an alternative to removing the allegedly offensive material; and (2) it promotes vigorous debate by facilitating responses to defamatory accusations.

(166.) Gertz v. Robert Welch, Inc., 418 U.S. 323, 340 n.8 (1974) (quoting Thomas Jefferson's First Inaugural Address).

(167.) For the view that enhanced rebuttal access online should totally immunize online providers from liability and "leave the whole defamation matter to unilateral self-help," see Hardy, supra note 120, at 1042. This author finds Hardy's suggestion somewhat extreme, since without the cooperation of providers, defamed persons cannot necessarily respond in the same forum quickly and easily. See supra note 120 and accompanying text.

(168.) For instance, retraction statutes often require that the retraction be "in as conspicuous and public a manner as that in which the alleged libelous statement was published." See Ga. Code Ann. [sections] 51-5-11 (1982 & Supp. 1995). This supports the idea that a rebuttal under the proposed model must be at least as prominent as the defamation was, i.e., same forum, equal space and accessibility. Retraction statutes may also be consulted to determine what constitutes a "reasonable time" for a provider to grant rebuttal access. See, e.g., id. [subsections] 51-5-11, -5-12 (in order to limit liability, a retraction in print must occur within seven days after receiving demand, or in the next regular issue; a broadcast retraction must occur within three days); Van Duzer v. Bourisseau, 179 N.W.2d 214 (Mich. Ct. App. 1970) (finding that the question of whether four days was allowance of a reasonable time for retraction was a question of fact).

(169.) A general principle to be drawn from right-to-reply statutes is that of equal space. In Miami Herald Publ'g Co. v. Tornillo, 418 U.S. 241 (1974), the Supreme Court considered a newspaper right-to-reply law that guaranteed a political candidate rebuttal space equal to that of the criticism that necessitated the rebuttal. The statute was held to violate the newspaper's First Amendment rights on the basis that the mandated rebuttal space took up limited column space and precluded other material from being printed. This rationale does not apply to the online context, however, where space is virtually limitless. See supra note 158. Thus, the equal space principle of the statute considered in Tornillo would appear to be fair and valid as applied in cyberspace.

(170.) While online anonymity does inhibit tracking of defamation creators, it should be recognized that such anonymity serves a number of valid interests. Anonymity allows subscribers to voice opinions without fear of retribution or harassment, thereby promoting the type of robust debate envisioned by the First Amendment. See Talley v. California, 362 U.S. 60, 64-65 (1960) (holding that the right to publish anonymously is protected by the First Amendment); ACLU v. Reno, 929 F. Supp. 824, 849 (E.D. Pa. 1996) (noting that anonymity is important to online users who seek access to forums dealing with AIDS, homosexuality, rape, and other stigmatized topics). It also protects providers' legitimate proprietary interest in their subscriber lists, which would otherwise be freely available to competitors and other organizations looking for mailing lists. See Charles, supra note 117, at 136. Moreover, as stated above, it is arguable that online anonymity actually serves the interest of the defamed person, because anonymous remarks are devalued by the public and thus are less likely to actually injure one's reputation. See Hardy, supra note 120, at 1049.

(171.) See Charles, supra note 117, at 136.

(172.) While subscriber identifiability is common with larger commercial services, it is often beyond the financial ability of small non-commercial providers. See ACLU, 929 F. Supp. at 847 (noting that "it would not be feasible" for many non-commercial organizations to implement access code systems). Thus, when considering the liability of such a small service, message tracking would assume a less prominent role in the reasonableness inquiry.

(173.) Identifying non-subscribers, however, is far more difficult under current technology. One example of this difficulty is the fraudulent misappropriation and use of a subscriber's account code by an unidentified non-subscriber, which is exactly what occurred in the Stratton case. See 23 Media L. Rep. (BNA) 1794, 1795 (N.Y. Sup. Ct. 1995); see also Peter H. Lewis, Libel Suit Against Prodigy Tests On-Line Speech Limits, N.Y. Times, Nov. 16, 1994, at D1. Additionally, the authorship of Internet content transmitted by a provider is frequently undetectable. A major obstacle in this regard is the existence of "anonymous remailers," intermediary computers that strip off the sender's name and address, thereby rendering the message anonymous. See Hardy, supra note 120, at 1011. Where anonymous remailers are located in foreign countries, regulation of these intermediary computers would be difficult absent an international treaty or some form of international forum where defamed victims could seek redress. See id. at 1011, 1051-53.

(174.) An example of providers' willingness to identify creators of defamation is the settlement between Prodigy and Stratton Oakmont in the Stratton case. In that case, the offender was an unidentified user of an employee's account code. As part of the settlement, Prodigy agreed to try to track down the user who had posted the offending messages. See Lewis, supra note 26, at A1.

(175.) See Cook, supra note 1, at 60.

(176.) See Religious Tech. Ctr. v. Netcom On-Line Communication Servs., Inc., 907 F. Supp. 1361, 1376 (N.D. Cal. 1995) (noting that the defendant Internet access provider and bulletin board service had suspended subscribers' accounts on over one thousand occasions for commercial advertising, posting obscene materials, and off-topic postings). Under the model proposed by this Article, providers would also have an incentive to suspend habitual offenders, because if a provider had notice of multiple abuses by a subscriber and yet allowed the subscriber to remain on the system, such acquiescence could be construed as a priori actual knowledge of any further defamatory transmissions by the subscriber, thereby triggering the provider's liability for any such further abuses. See Sega Enters. Ltd. v. MAPHIA, 857 F. Supp. 679, 686-87 (N.D. Cal. 1994) (emphasizing that even though the defendant bulletin board service did not know exactly when each act of subscriber copyright infringement occurred, its policy of tolerating such infringement constituted knowledge of the infringing activity sufficient to impose contributory infringement liability); Perritt, supra note 39, at 107 (noting that "if special circumstances were present, such as the fact that the operator knew of the user's repeated transmission of defamatory messages ... the court may impute knowledge").

(177.) Several commentators have suggested that providers should be held liable for failing to take reasonable care in identifying the person who wrote the offending message. See Alison Frankel, On-Line, On the Hook, Am. Law., Oct. 1995, at 59, 62 (citing a suggestion by David Post, associate professor of law at Georgetown University Law Center); Charles, supra note 117, at 147 (suggesting that a provider be subject to defamation liability if it "negligently fails to establish an identification code such that subscribers to the bulletin board are known to the operator by name and address"). The proposed model is consistent with these suggestions because it expressly includes efforts to identify the culprit as a factor in the reasonableness inquiry. However, the model stops short of requiring providers to implement message identifiability measures, in recognition of the fact that many small non-commercial online providers lack the resources to do so and would likely be forced to shut down if faced with such a requirement. See ACLU v. Reno, 929 F. Supp. 824, 847 (E.D. Pa. 1996).

(178.) While the shallow pockets of individual subscribers are certainly a factor to be considered in fashioning remedies for persons defamed online, this concern is tempered somewhat by the fact that many private BBS operators are as impecunious as their users. See Eric C. Jensen, An Electronic Soapbox: Computer Bulletin Boards and the First Amendment, 39 Fed. Comm. L.J. 217, 221 (1987).

(179.) "Subscriber financial responsibility policies" could be defined broadly in an amended Telecommunications Act, to allow room for providers to develop creative measures as technology and market conditions change.

(180.) See White Paper, supra note 135, at 122-24 (recommending a copyright strict liability paradigm for online service providers, because strict liability would give providers an incentive to reduce the damage to copyright holders by educating users, requiring indemnification, purchasing insurance, and developing technological solutions for screening out infringement).

(181.) There is ample evidence that online providers are voluntarily developing industry customs and standards aimed at deterring and remedying unlawful content. In addition to suspending abusive subscribers, online providers frequently require subscribers and other third-party content creators to sign indemnity agreements shifting full responsibility for unlawful content to the creator. See Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135, 143 (S.D.N.Y. 1991). In the obscenity context, about two dozen providers have formed a coalition, the Platform for Internet Content Selection (PICS), to develop standards for content providers to rate and label their content so as to facilitate parental blocking of unsuitable material. See Online Firms Team Up On Technology, Wash. Post, Sept. 9, 1995, at C1. Moreover, all of the major commercial online providers offer, and continue to develop, technologies that allow parents to block their children's access to indecent online material. See Leahy Report, supra note 32, at 8-11. Finally, a group of online providers has founded the Virtual Magistrate Project, a voluntary online arbitration tribunal that hears complaints from interested parties about allegedly offensive, unlawful, or inappropriate online content and recommends whether the material should be deleted or restricted. See generally Virtual Magistrates Seek to Resolve Online Disputes, Information Law Alert: A Voorhees Report (Mar. 15, 1996). These initiatives and customs, which providers are voluntarily developing because it is in the industry's best interest to do so, are the very type of activities that the White Paper concluded would not occur without a strict liability regime. See White Paper, supra note 135, at 122-24.

(*) Keith Siver received his J.D. from Georgetown University Law Center in 1991 and will receive an M.A. in Journalism and Mass Communication from the University of Georgia this spring. He is currently an associate with the Atlanta law firm of Alembik, Fine & Callner. Mr. Siver wishes to thank Professor William Lee, Andrew Weisman, Esq., and the editors and staff of the Rutgers Computer & Technology Law Journal for their invaluable assistance and advice in the preparation of this Article.
COPYRIGHT 1997 Rutgers University School of Law - Newark
