Internet hate speech and the First Amendment, revisited.
I. INTRODUCTION
II. EXAMINING THE CURRENT INTERNET HATE SITE CRISIS UNDER FIRST AMENDMENT JURISPRUDENCE
  A. The Abundant "Coincidence" of Internet Hate Websites, Hate Groups, and Hate Crimes in the United States
    1. Recruitment
    2. Advocating Violence
    3. Statistics
  B. Introduction to United States' Approach to Hate Speech: Advancing the Marketplace of Ideas by Prohibiting Content-Based Restrictions
    1. First Amendment Protection of Hate Speech & Two Important Exceptions to Prohibition of Content-Based Restrictions
  C. Internet Hate Speech in the United States: First Amendment & Federal Statutes
    1. Juxtaposition Between United States' and International Approach to Hate Speech
III. RETHINKING THE UNITED STATES' APPROACH TO ANONYMOUS INTERNET HATE SPEECH
  A. Revisiting Reno v. ACLU: Internet as an "Invasive" Medium
    1. The Internet is an Invasive Medium
    2. Analyzing these trends under FCC v. Pacifica and Reno v. ACLU
    3. Preventing Children's Accidental Viewing of Indecent Internet Material: Creating New Generic Top-Level Domain to Aid Content-Filtering Technology
  B. Revisiting R.A.V. for Regulation of "Secondary Effects" of Hate Speech Websites
    1. Internet Anonymity, Its Benefits, and Its Drawbacks
    2. Secondary Effects of Hate Speech Websites
    3. First Amendment Protection of Anonymous Speech, and its Application to the Internet
IV. CONCLUSION
I. INTRODUCTION
Heralded as one of the most important inventions of the human species, (1) the Internet has facilitated viewpoint-sharing, personal communication, education, (2) and the economic development of corporate endeavors. (3) The Internet engenders freedom of expression by convening a diverse array of people and viewpoints for discourse in a marketplace of ideas, (4) but the relative inexpensiveness and efficiency of Internet speech, as well as the pervasiveness of its messages, creates the potential for serious abuses. (5) Actualizing this potential is anonymous Internet use, which has afforded individuals the ability to disseminate distasteful messages to vast audiences without fear of personal accountability. (6) Specifically, Internet users may, and do, anonymously create hate sites--websites that engage in hate speech, advocating messages that discriminate against and promote violence or intimidation toward people of a specific religion, race, gender, national origin, sexual orientation, or ethnic background. (7) These sites aim primarily to recruit new members and to spread their messages, as well as to advocate specific acts of violence and criminal activity toward people sharing one of the above characteristics. Common tactics take various forms, including offering free downloads of music with hate-filled messages, maintaining racist Internet games that allow players to murder virtual depictions of out-group members, and targeting children through cartoon characters. These messages also often contain inaccurate and misleading information aimed at convincing listeners of a dangerous revisionist history: namely, that people with a specific characteristic have caused serious social and economic problems that can only be eradicated by engaging in violence against them.
This note does not advocate that these sites' expression blindly be deemed constitutionally unprotected. Instead, it proceeds from the premise that the need to adopt a proactive deterrent to Internet hate speech rises in proportion to the likelihood that a society's economic and social forces will lead to the actualization of these sentiments. Few can dispute that the United States is currently embroiled in a period significantly defined by polarized opinions on economic and social issues. Considering especially the influx of hate groups operating in America, it is important to limit the potential of Internet hate sites to incite violent action toward specific groups of people. Recent statistics show undisputedly that these hate sites are increasing steadily, and that right-wing extremist group membership has also increased alarmingly. (8) This note aims to analyze what type of regulation of Internet hate sites--if any--would be permitted under United States jurisprudence to counteract these trends. Hate speech has historically received strong First Amendment protection, and any potential regulation must avoid the traditional prohibition of content-based restrictions. Furthermore, the courts have offered little guidance on whether hate speech on the Internet will be protected as staunchly.
Part II of this note will discuss the protections afforded to hate speech in the United States, as well as the contrary stance taken by other nations, and will examine two important exceptions to the prohibition of content-based restrictions of hate speech and how these exceptions may serve as potential bases for regulating Internet hate sites. Part III then explores lower protection for Internet hate speech in two ways: First, Part III(A) discusses whether child Internet users can be considered "captive audiences" on account of the invasiveness of the medium, and whether indecent Internet hate sites can accordingly be restricted in ways that specifically advance children's welfare without detracting from their accessibility for adults. Second, Part III(B) inquires whether restrictions on the content of Internet hate sites could be justified on content-neutral grounds as targeting whatever "secondary effects" may result from such speech.
Importantly, these analyses do not focus on whether single speakers of hate can be silenced as a form of government-sponsored censorship. Instead, this note aims to determine whether the manner of such expression can be limited, and whether potential audiences can be controlled--under the principles of the United States Constitution--in order to limit the troubling influx of hate groups and Internet hate crimes. Accordingly, the analyses will not focus on Internet hate sites from the perspective of the individual Internet user, who may be harmed by viewing hate speech directed at them. While these harmful effects cannot be ignored, this note instead focuses on the role that these sites play in increasing membership in hate groups, as well as facilitating hate crimes.
II. EXAMINING THE CURRENT INTERNET HATE SITE CRISIS UNDER FIRST AMENDMENT JURISPRUDENCE
Hate speech and hate crimes have received different treatment within the United States legal system. Although both the federal government and several states have enacted legislation to address the harmful effects of criminal activity specifically motivated by discrimination or hate, (9) American jurisprudence has remained relatively stagnant in the face of mere expressions of hate. Recently, however, this stagnancy has become more apparent with the rise of right-wing extremists reacting to the increasingly turbulent political climate in the United States. (10) Hate-driven websites have proliferated, (11) and these sites, combined with the prevalence of anonymous Internet use, have contributed to an increase in hate crimes themselves. (12) In light of this trend, this Part examines the current state of hate sites and hate groups, as well as the protections afforded to hate speech both in the United States and in other countries.
A. The Abundant "Coincidence" of Internet Hate Websites, Hate Groups, and Hate Crimes in the United States
Haunting are the words of Benjamin Nathaniel Smith, an American who in 1999 carried out a racially-motivated, murderous shooting rampage before killing himself. (13) "It wasn't really 'til I got on the Internet, [and] read some literature of these groups that ... it really all came together.... It's a slow, gradual process to become racially conscious." (14)
Just as many developing technologies, originally well intended, have spawned negative consequences, (15) the Internet's potential for instant, utopian communication has begotten significant collateral damage. The Internet has helped people to communicate instantly, and to locate and enjoy the company of others with similar interests in ways that may not otherwise have been available. Unfortunately, for some groups in the United States, that shared interest is hatred. Hate groups have utilized the Internet for two distinct purposes: to recruit new members to advance their cause; (16) and to advocate violence against and intolerance of the objects of their hatred. (17)
1. Recruitment
First, hate groups engage in thorough efforts to recruit new members in order to rectify what they purport to be dire problems caused directly by a specific group of people. These sites exploit society's general fear of social, political, and economic uncertainty to recruit members by blaming out-groups for any number of social, political, or economic problems. (18) Efforts are directed in large part toward impressionable people--such as those who are lonely or upset--because they are the most susceptible both to hate group doctrines and to general Internet surfing. (19) Children and youth are particularly targeted since they are often impressionable, lonely, marginalized, and left wanting a sense of identity and acceptance within a group. (20) In addition, children have not yet gathered the experience or education necessary to deconstruct inaccurate or misleading information presented to them. (21)
Accordingly, hate sites have recognized that the Internet is an efficient recruitment tool and now feature content that is more attractive to children. Typical methods of attraction include cloaking racist and xenophobic messages within music, games, activities, and cartoon characters. (22) In possibly their most deceptive method of recruitment, many hate sites are now disguised as educational websites, offering intentionally misleading "educational" content that misinforms younger viewers under the cloak of legitimate scholarship. (23) Unfortunately, this content consists of skewed or false "evidence" that allegedly supports their claims that a group of people caused a social or economic problem. (24) Such "evidence" includes pseudo-scientific intellectual concepts and historical revisionism, (25) as well as links to credible academic articles--generally taken out of context--concerning real social issues such as criminal justice and homeland security. (26) These sites also deliberately mislead viewers with their titles and content, claiming to provide new and interesting interpretations of generally accepted academic work. (27) To aid in their disguise, more sophisticated language concerning social and academic issues has replaced older trademarks, e.g., swastikas and burning crosses, that have historically denoted a supremacist group or website. (28)
Furthermore, as the Internet has revolutionized media violence by allowing widespread accessibility, hate site recruitment efforts have capitalized on this exposure of cruel and violent content to children. (29) Accordingly, hate sites have developed shocking ways to accomplish their purpose of recruiting both children and adults, including displaying graphic images of human beings who have been killed and mutilated. (30) Violent racist video games are also made available for website visitors. One such game, entitled "Border Patrol," places players at a racist illustration of the Mexico-United States border, and prompts them to prevent Mexican immigrants from entering the United States by shooting them. (31) The stereotypical depictions of each of the three human "targets" are just as disturbing as the gory result of the players' successful shootings. (32) The three types of Mexican immigrants sprinting across the border and following an arrow to the "welfare office" are called "drug smugglers," "Mexican nationalists," and pregnant "breeders" with children. (33) This game, and other similar games, (34) may provide impressionable players with a sense that such activity is socially acceptable, especially when they live in an area, or lead a lifestyle, that prevents them from encountering an opposing viewpoint.
2. Advocating Violence
Second, these sites do not stop at mere recruitment. Instead, they also seek action from viewers by specifically advocating violence and providing tools for individuals to carry it out. (35) Due to the rise in legal scrutiny of Internet hate and cyber-bullying, hate site administrators have begun to minimize outward threats of violence to avoid potential repercussions from legal authorities. (36) As these threats have historically been a staple of a hate group's speech, hate sites have been forced to use more clandestine methods of expressing the same messages. (37) Thus, these sites often disclaim any connection to violence, and instead advocate it in more discreet ways that are difficult to identify on a specific hate site. (38) Site administrators sanitize the sites' images by including language that seems to minimize the group's desire for violence or criminal activity directed at out-groups, but then provide links to other websites that not only describe potential criminal activity, but also show photos and details of past activity. (39)
Even when not specifically advocating acts of violence, these sites still attempt to incite action as a byproduct by providing viewers with tools to conduct criminal activity independently in support of the extremist agendas that their content engenders. (40) The Simon Wiesenthal Center for Tolerance recently reported an alarming trend in Internet hate sites; while these sites are used primarily as a tool to recruit new members, they also now serve as a potent vehicle to facilitate "lone wolf" terrorists in completing hate-driven violence. (41) When a hate group decides that the pressure of law enforcement is too high for the safe communication of violent messages, the Internet provides an extremely valuable tool for them to achieve a similar result without accountability. (42) To do so, the group dissuades followers from "joining" by preventing its members from outwardly banding together or meeting, and instead points its followers to websites that teach them how to plot like terrorists. (43)
3. Statistics
Although the precise number of hate sites is difficult to quantify, there has been a dramatic increase over the years. (44) At the Internet's early stages in 1995, approximately fifty hate groups used electronic bulletin boards to spread their messages. (45) By 1999, however, at least 800 hate websites targeted "religious groups, visible minorities, women and homosexuals," (46) and perhaps as many as 2,000. (47) Furthermore, the Simon Wiesenthal Center for Tolerance's Task Force Against Terrorism and Hate reported that while over 5,000 "problematic" websites that aided terrorism and promoted racial discrimination and violence existed in 2005, (48) that figure jumped to approximately 10,000 by 2009. (49) According to the Wiesenthal Center's 2010 Digital Terrorism and Hate Report, the number of hate sites found internationally now stands at over 11,500, a 20% increase from 2009. (50)
Concurrently, there has been a staggering increase in the number of hate groups (51) operating in the United States. (52) According to the leading researcher in this statistical area, the Southern Poverty Law Center, that number has grown from 602 in 2000 to 926 in 2009, an increase of about 54%. (53) In 2010, this number rose to almost 1,000, (54) including a separate influx of over 136 anti-immigrant vigilante groups--an increase of over 80%--that are best characterized as "nativist extremists." (55) According to research conducted by the Anti-Defamation League, hate-motivated Internet activity has also recently risen dramatically. (56) Complaints against groups that post messages on social networking sites calling for violence and hatred of Jewish, African-American, disabled, and homosexual citizens increased 200% in 2009. (57)
The Department of Homeland Security has concluded that these increases have resulted primarily from public reactions to several emerging issues, including the election of the first African-American President of the United States, the current economic crisis, and the enduringly polarizing issue of illegal immigration. (58) Right-wing extremist groups have historically anticipated economic collapses in the United States, (59) and have engendered public fear to increase membership. (60) Just as they did in the 1990s, these groups are currently capitalizing on social distress by emphasizing the consequences of both the present economic downturn--including real estate foreclosures, unemployment, and the credit crisis--and of social issues such as the effects of illegal immigration. (61) While the 1990s influx of hate groups gradually waned, (62) the current move toward right-wing extremism is likely to grow in strength so long as these fear factors persist. (63)
Further troubling is the potential for an influx of actual hate crimes and violence: evidence shows that hate speech on the Internet begets action. (64) This evidence, along with the distinct increase in hate groups and websites, indicates a likely increase in violence. After studying data and specific cases of hate-motivated violence over the past decade, the Chair of the Anti-Defamation League's Internet Task Force, Christopher Wolf, stated: "[t]he evidence is clear that hate online inspires hate crimes." (65) Wolf explains that the Internet strengthens right-wing extremists by allowing recruits anonymous access to propaganda, as well as anonymous coordination of their activities. (66) Such coordination is better left unattained, especially given that the Internet has also provided extremists with greater access to information facilitating weapons training and tactics, as well as bomb-making. (67)
Although it is difficult to quantify a statistical correlation between these hate sites and the commission of bias-motivated crimes, experienced opinions support the link. Even though the Federal Bureau of Investigation's Uniform Crime Reports from 1995 to 2008 show that the number of hate crimes reported to state and federal authorities has remained relatively stagnant, (68) these statistics are not entirely reliable in the Internet context. The FBI's reports only account for crimes against persons or property, (69) such as vandalism and assault, and do not clearly account for crimes typical to cyberspace, such as defamation, cyber-bullying, and harassment. This point is also consistent with the government's recent statement that the actual number of hate crimes currently being committed could be as much as ten times higher because many go unreported. (70) Internet crimes in particular are more likely to go unreported than crimes with physical consequences and damages, for which the incentive to seek restitution is greater. Accordingly, multiple experienced sources have noted that hate crimes have recently risen dramatically. For example, the Assistant Attorney General for the U.S. Department of Justice Civil Rights Division, Tom Perez, has noted that the data and his own experience indicate that hate crimes against every group are sharply on the rise. (71)
In sum, hate websites are increasing rapidly, recruitment to and membership in hate groups are at an all-time high, and the current economic and social issues facing the United States give extremists an incentive to use technology to advance hate and violence instead of tolerance. (72) Instead of remaining inactive on the basis of the academic First Amendment principles discussed immediately below, affirmative steps must be taken before the trends these statistics suggest become reality. Law enforcement and legislative forces will inevitably be called into action (73) once violence begins to increase; a proactive approach to the issue would be more appropriate.
B. Introduction to United States' Approach to Hate Speech: Advancing the Marketplace of Ideas by Prohibiting Content-Based Restrictions
To be clear, this note does not deny that hate sites are a form of their administrators' expression, or that any steps to restrict them must survive the gauntlet of First Amendment scrutiny. Even though these sites provide the increasing number of hate and extremist groups a platform to incite violence against certain people, their expressive content alone cannot be the object of regulation without sufficient justification. Examining the appropriate level of protection owed to Internet hate sites therefore requires a discussion of generally applicable First Amendment principles, as well as specific seminal Supreme Court cases involving hate speech.
The First Amendment's guarantee of free speech and expression is a staunchly defended aspect of personal liberty. Such expression is invaluable to a citizen's right to democratic self-governance, for one must be able to discuss and criticize his or her elected representatives in order to decide public issues and replace those who are not acting according to the wishes of the People. (74) As a result, political speech has historically been given the utmost protection. (75)
To further these ends, the First Amendment aims to establish a "marketplace of ideas" (76) in which the messages of all speakers are protected so that, in the aggregate, the "truth" is established as a byproduct of multiple competing viewpoints. (77) The remedy for harmful speech in this marketplace is not regulation and restriction, but more speech made in contradiction. (78) This theory posits that truth is more likely to emerge from ideas being compared and contrasted with one another, for even falsity contributes to society by clarifying the perception of truth through juxtaposition. (79) In addition, without this marketplace a government might suppress speech that does indeed have truth content, depriving citizens of its benefit in advancing a functional society.
Expression is protected most staunchly through the Supreme Court's central tenet that content-based restrictions within this marketplace are presumptively invalid. (80) Since the primary function of the First Amendment is to prevent the government from attempting to control its citizens by silencing specific viewpoints or the uttering of specific messages, restrictions based on a particular message must accordingly meet the strictest judicial scrutiny. (81)
In a famous and illuminating statement of the First Amendment's function in light of abusive speech, Justice Brandeis described the importance of realization:
Those who won our independence believed that ... the greatest menace to freedom is an inert people ... [and] that public discussion is a political duty.... They recognized the risks to which all human institutions are subject. But they knew that order cannot be secured merely through fear of punishment for its infraction; that it is hazardous to discourage thought, hope, and imagination; that fear breeds repression; that repression breeds hate; that hate menaces stable government; that the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies; and that the fitting remedy for evil counsels is good ones. (82)
Justice Brandeis's statement--like much of constitutional scholarship on the relationship between repression and hate--focuses on the academic interplay between citizens and government: citizens must be able to speak so that they do not begin to resent or hate their government. As Justice Brandeis emphasized, there is the inherent prospect of government-led suppression of citizens' expression and thought, and of a subsequent hatred of that government by those repressed citizens. (83) However, there is an inevitable sliding scale between an apathetic citizenry and an overly zealous one, and it is a fallacy to maintain that inertia is always more damaging than riot. One must account for the "risks" to which Brandeis alludes--namely, a First Amendment jurisprudence that allows misguided clamor at the expense of the safety and well-being of certain citizens. (84)
Accordingly, a society's too-lenient tolerance of hate speech invites a second type of repression within society: the inherent prospect of citizens repressing one another. When certain citizens attain means and stature sufficiently higher than others', the practical ability--although not condoned by any particular government body--to repress those of inferior means becomes apparent. Within a supposed marketplace of ideas, methods of expression and communication can be used by groups with access to admonish groups without access. Such expression may eventually become sufficiently commonplace to seem condoned by the majority. As a result, while citizens may grow to hate a repressive government, they may also grow to hate each other. Hate speech pushes the limits of this marketplace of ideas. Although the value of robust political debate engendered by a marketplace free of content-based government restrictions cannot be overstated, there are several problems with an unbridled marketplace that must be addressed in the hate site context.
First, although the marketplace doctrine seems to presuppose that some objective truth will emerge from the competition of opinions, practical experience implies otherwise. Within the marketplace, there is no arbiter of these conflicts besides the eventual dissolution of unpopular views that lose out to more popular ones. In this sense, truth is decided by prevalence instead of virtue, and by volume instead of merit. Under this approach, barring certain voices from the ability to become prevalent creates a self-fulfilling prophecy that those voices will not prevail as truth. Thus, for any meaningful decision to be made about what speech is acceptable, the playing field in this market must be even. This is often not the case, since citizens have widely varying degrees of access to potent methods of communication, such as the Internet. Theoretically, the affluent may use the Internet to repress the poor by admonishing their viewpoints without contradictory discussion. Just as Professor Tribe explained that a "proletarian dictatorship" is not an equal ground for viewpoint sharing, (85) a true marketplace does not result from savvy Internet users speaking out against groups with insufficient practical and financial means to access the Internet in a meaningful way. Hate sites often target such groups, while many of these groups' leaders are not afforded the consistent Internet access necessary to speak out in contradiction.
Second, even though a sentiment may not ultimately survive the marketplace's scrutiny, its consistent consideration can be damaging by helping a hate-driven movement gain mainstream acceptability. Accordingly, Professor Tsesis points out that racist speech, although tolerated, "is not an innocuous part of political discourse." (86) Racist sentiments do not exist in a social vacuum merely because they do not pose an imminent threat of harm, and it is false to assume that advocacy of future violence has no realistic effect on a movement's chance for future success. (87) Allowing racist comments to go unchallenged, and to occupy an equal footing with political discourse, can eventually indoctrinate people to accept their message, for hate is oftentimes developed gradually through propaganda. (88) Thus, the presence of hate speech in the marketplace is more accurately viewed on a longer practical timeline. Even when hate speech does not lead to violence in a specific instance, it may eventually do so. Systematic oppression has historically resulted from earlier advocacy, preparation, and desensitization. (89) Hate sites toe this line, for their continual advocacy of violence in the marketplace, without government-sanctioned backlash, reinforces its acceptability.
Third, a marketplace containing viewpoints that advocate hatred and violence toward a group of people provides a fertile breeding ground for impressionable people, such as children, to develop their consciousness under the tutelage of extremist zealots who may distort information. Even for citizens bent on hate, the liberty of free expression is considered fundamental because its absence would severely undermine, and perhaps even eliminate, a citizenry's capacity for "self-realization." (90) Further, there is arguably no more important path to equality than providing an equal opportunity to have one's voice heard. On the other hand, a citizen's ability to "realize" his or her beliefs is subject to the limits imposed by the inalienable freedoms of other citizens. Additionally, people are not able to say whatever they please when their words significantly affect the health of others. The Supreme Court recognized these limitations, providing less than complete protection for "fighting words," (91) defamatory speech, (92) and sexually explicit speech. (93) Yet, certain realizations, such as Benjamin Nathaniel Smith's, (94) are inherently dangerous. As discussed below in Part III, this note examines the extent to which the First Amendment would allow regulation of hate sites to prevent their pervasive messages from fostering dangerous realizations.
1. First Amendment Protection of Hate Speech & Two Important Exceptions to Prohibition of Content-Based Restrictions
Hatred is commonly associated with citizen-led repression, and it is also a byproduct of unregulated and tolerated racist speech. According to the above statistics, (95) many take to expressing this hatred through confrontations, demonstrations, literature, and websites. To prevent repressive expressions of hate against a citizenry unable to respond in a meaningful way, one must account for an overly lenient First Amendment jurisprudence. The discussion begins with the seminal cases on hate speech, R.A.V. v. City of St. Paul (96) and Virginia v. Black, (97) as well as a look at Beauharnais v. Illinois. (98) These cases outline the development of the Supreme Court's answer to the question of whether hate speech receives strict First Amendment protection. While the lasting effect of Beauharnais's admonishment of libelous utterances directed at a group is questionable, R.A.V. has definitively narrowed that holding by explicitly prohibiting content-based restrictions on this type of speech.
In its earliest case on the issue, Beauharnais, the Supreme Court provided its most forceful statement on hate speech. Giving credence to the argument that libel--a type of speech not protected by the First Amendment--can be restricted vis-a-vis individuals and "defined groups," (99) the Court upheld a state provision prohibiting publications that placed a particular race, color, or religion under derision by describing them as depraved, criminal, unchaste, or lacking in virtue. (100) The Court seemingly accepted group libel--expression closely analogous to hate speech--as unprotected speech because it severely lacks social value in the marketplace of ideas when weighed against the countervailing interest in morality: "Resort to epithets or personal abuse is not in any proper sense communication of information or opinion safeguarded by the Constitution." (101)
However, even though the Court in Beauharnais willingly judged hate speech as relatively valueless and unprotected by the First Amendment, the Court has generally declined in later cases to support the states' police power to punish and prevent group libel. (102) While certain Supreme Court Justices and commentators have argued that group libel, such as hate literature, does not deserve First Amendment protection because it is not the "civic speech of Reason," (103) this view is decidedly a minority one. Instead, the Court now views hate speech entirely under the "fighting words" doctrine, thus extinguishing some of the expansive Beauharnais holding's fire by specifically enforcing a blanket prohibition on content-based restrictions on hate speech. (104)
Nevertheless, the decisions in R.A.V. and Virginia v. Black seemingly permit statutes that punish hate speech, in the form of fighting words, even if directed at a group as opposed to an individual. In R.A.V., the Supreme Court held a local ordinance unconstitutional that banned symbolic conduct--such as cross burning or displaying a swastika--when it would arouse anger or resentment on the "basis of race, color, creed, religion, or gender." (105) However, even by striking down this statute as an impermissibly content-based regulation, the Court did not create an impenetrable precedent for future hate speech statutes.
To clarify this assessment, the R.A.V. opinion comprises two analytic components: the validity of the state's interest in creating the ordinance, and whether or not the ordinance's means are narrowly tailored--neither overbroad nor under-inclusive--to satisfy the state's interest. Importantly, the majority opinion does not hold that regulating hate speech aimed at promoting violence or fear--even when expressed through the display of a symbol--is an insufficiently compelling state interest to pass First Amendment scrutiny. Instead, the Court struck down the statute because it was drawn too narrowly to satisfy that interest. While a municipality is permitted to prevent the provoking of anger or resentment on the basis of race, color, religion, or gender, it cannot prevent it on only those bases. (106) The statute must have also proscribed causing anger on the basis of other subjects, such as "political affiliation, union membership, or homosexuality." (107) Importantly, the R.A.V. statute was held to be content-based discrimination only because the "fighting words" proscription--in this case, of the display of a symbol--targeted specific messages, not because hate speech is a message that cannot be regulated for its content.
As a result, R.A.V. does not stand for the proposition that regulation of hate speech is futile. Although the Court struck down a specific hate speech ordinance, it ironically reinforces the validity of regulating hate speech in the process. The Court not only retained room to uphold future hate speech statutes, but also provided guidance on how to draw such statutes, providing two exceptions--outlined in the previous and following paragraphs--to the fundamental ban on content-based restrictions. (108)
First, a content-based restriction is justified if motivated by a permissible content-neutral purpose, namely the "secondary effects" of the speech. (109) This position tracks the Supreme Court's prior stance in City of Renton v. Playtime Theatres, where a city ordinance restricting sexually-explicit speech--prohibiting pornographic theaters--was upheld on grounds that it was not aimed at silencing the content of this speech, but at the secondary violent effects that resulted in the neighborhood. (110) Whether this position allows regulation of hate sites on account of a potential link to an increase in hate crimes is examined in Part II of this note.
Second, content-based restrictions within unprotected types of speech are permitted if the restriction is aimed at remedying the specific reason that the type of speech is unprotected. (111) Incidentally, Virginia v. Black provided an example of when hate speech rose to the level of fighting words, and could be regulated by a content-based restriction. (112) Specifically, when hate speech rises to the level of a true threat by communicating a serious intent to commit an unlawful act of violence against the listener, it is worthy of lesser First Amendment protection. (113) Importantly, the "speaker need not actually intend to carry out the threat," (114) because the listener is the true interest that warrants protection in the hate speech context. Indeed, a prohibition on true threats "protect[s] individuals from the fear of violence" and "from the disruption that fear engenders," in addition to protecting people "from the possibility that the threatened violence will occur." (115) Since cross-burning has such a history of virulent intimidation in the United States, the Supreme Court held that it was a "true threat" even if the actor did not intend to harm the viewer. (116) This type of violence and intimidation is the real reason that fighting words are provided less First Amendment protection. Thus, a content-based restriction banning cross-burning, but not all kinds of fighting words, was permissible. (117)
Black is ripe for application to Internet hate sites, for it held that an expression of hate speech can be regulated as fighting words even if the speaker does not intend to carry out the threat of violence that the expression implies. Thus, the government can prohibit the Internet expression of virulent and abhorrent statements advocating violence towards groups--based on their immutable traits or religion--even if the speaker does not intend the actualization of specific acts of violence. Nevertheless, since Black stands most pertinently for the proposition that an expression of hate must be historically recognized as prevalent and virulent, it would only support hate site regulation that prohibited such expressions without prohibiting the entire site. While this would be useful in the aggregate, the first exception is explored in Part II of this note.
In addition, outside of the hate speech context, the Supreme Court has also recognized the harm caused by indecent speech and has therefore limited its ability to reach impressionable children. (118) To do so, the Supreme Court did not state that indecent speech cannot be uttered, but instead chose to regulate how it can be disseminated through certain media of expression. Part II.A of this note examines whether this medium-specific approach can be used to regulate Internet speech by limiting hate sites vis-a-vis children.
C. Internet Hate Speech in the United States: First Amendment & Federal Statutes
Few people can help their immediate condemnation when confronted with virulent expressions of hate on the Internet: "There ought to be a law [against it]." (119) The overwhelming force of United States legal scholarship, however, chastises the thought of controlling Internet speech for fear of "chilling" its valuable social contributions. (120) According to the marketplace of ideas approach, the most appropriate remedy for Internet hate speech is quelling it by expressing contradictory sentiments. (121) Unfortunately, such debate is oftentimes a drawn out process with merely academic results, for the marketplace does not always permit equal participation by the hated.
Due to the sharp increases in hate sites and complaints of Internet hate and abuse, an examination aimed at preventing resultant and future harms of such expression is necessary. Methods of doing so, however, must surmount the Supreme Court's longstanding First Amendment protections. (122) Since the Supreme Court has not addressed Internet hate speech, this examination must appeal to general First Amendment jurisprudence for want of specific guidance.
Aside from the aforementioned First Amendment protection given to hate speech, certain United States federal statutory provisions provide even more practical protection for hate sites. Federal law has affirmatively granted Internet Service Providers (ISPs) general immunity from liability for the content of websites that they "host," (123) and many ISPs subsequently host websites regardless of their content. (124)
Consequently, since 1995 an estimated 70% to 90% of hate websites have been hosted by ISPs in the United States. (125) While there were approximately 160 active hate websites in 1995, this number grew to over 4,000 worldwide by 2006, of which 2,500 are within the United States. (126) Despite this proliferation of hate websites, the United States government has been very reluctant to create new laws aimed at Internet hate even though legislatures around the world have heeded the call to do so. (127)
The United States has in fact drawn the line at Internet activity that constitutes a threat to national security or the physical well-being of specific persons. (128) However, Congress and the Supreme Court have been conservative in determining what actually constitutes these threats. It would behoove the U.S. legislature and courts to pay homage to European countries' belief that Internet hate speech detracts from their citizens' aggregate welfare and safety, and to act accordingly by fashioning a deterrent for these websites.
1. Juxtaposition Between United States' and International Approach to Hate Speech
International treaties and statutes in other nations advocate and effectuate the criminalization of Internet hate speech, in sharp contrast with the United States' approach to governing hate speech. Even though the European Union has established a similar safe-haven regime for ISPs, (129) there is no similar concurrence between the two approaches on the liability of individual speakers of hate. Since the international community has specifically accepted that hate speech perpetuates racism (130) despite citizens' theoretical opportunity to speak out against it, many western countries have enacted legislation restricting hate speech in order to protect their citizens' rights and dignities, as well as their ability to participate equally in government. (131)
The development of international treaties is informative of this difference. (132) First, the U.N. Convention on the Elimination of Racial Discrimination, opened for signature in 1966, obligated party countries to prohibit general hate speech propaganda. (133) The Council of Europe later stated in 2002 that since racism is a crime, not an opinion, (134) member states must go beyond fighting racism to specifically restricting the "dissemination of hate speech against certain nationalities, religions and social group...." (135) Accordingly, during the Convention on Cybercrime in 2003, the first international treaty on Internet criminal offenses, (136) the Committee of Experts on the Criminalization of Racist or Xenophobic Acts Using Computer Networks added a Protocol that advocated the criminalization of Internet hate speech. (137)
In 2004, the Organization for Security and Cooperation in Europe (OSCE) finally turned its attention to Internet hate speech, organizing the first conference aimed specifically at addressing it. (138) This conference, the OSCE Meeting on the Relationship Between Racist, Xenophobic, and Anti-Semitic Propaganda on the Internet and Hate Crimes, highlights the juxtaposition between American and European views, with American law professor Ronald Rychlak presenting the United States' position. (139) After being prodded to cease displaying "typical American arrogance" by hiding behind the First Amendment, (140) Mr. Rychlak explained the American idea that censorship is more offensive than offensive speech, and that the United States instead draws the line when expression presents a clear and present danger by constituting harassment, threats, or incitements to imminent lawlessness. (141) In contrast, European states seem to believe that hate speech websites cross this line per se. In other words, imminence must be viewed through a wider lens in order to understand that hate speech aggregately poses a clear and present danger.
The effect of the United States' stance is to preclude a viable international legal solution by providing a safe haven for the majority of hate speech sites to be established and operated. (142) Thus, despite international efforts to criminalize these websites, (143) foreign citizens can easily create and operate such sites on American servers, making enforcement of these statutes impracticable. (144) This problem was the topic of the most recent international meeting on Internet hate speech, the Global Summit on Internet Hate Speech, held in Washington, DC in November of 2008. (145) The international community has seemingly begun to begrudgingly accept that the United States will not enact a general statute criminalizing hate speech websites, and is searching for new answers. (146)
While the United States cannot be expected to completely outlaw hate speech, it should rethink its stance in a manner that allows it and other countries to effectively limit the harmful effects that these websites have on the international community. The following two-pronged analysis provides two practical and potentially effective methods of regulating hate speech websites that could deter not only existing administrators from continuing their efforts, but also potential ones from creating them.
III. RETHINKING THE UNITED STATES' APPROACH TO ANONYMOUS INTERNET HATE SPEECH
There comes a point where statistical evidence and an influx of complaints must lead United States jurisprudence to reconsider the level of constitutional protection given to Internet speech. That time is now. The following two lines of reasoning carry with them two potential corresponding remedies.
A. Revisiting Reno v. ACLU: Internet as an "Invasive" Medium
Although restrictions on expressions of opinions, including hate speech, must generally meet the highest and strictest level of judicial scrutiny, (147) not all media of speech are as staunchly protected, on account of the unique problems they may present. Specifically, the Supreme Court held in FCC v. Pacifica Foundation that indecent speech finds itself protected only by minimal judicial review under certain circumstances. (148) The Court expressed that not all media of speech are protected equally because they each present unique problems. (149) Specifically, the government has been justified in regulating broadcast media because of the history of extensive government regulation of the medium, (150) and its particularly "invasive" nature. (151) The history of government regulation of the medium in question is relevant because it informs the public of whether or not speech that it presents is approved by the government and society. (152) In addition, the "invasive" nature of the medium is relevant because it informs the Court of how likely a child is to be accidentally confronted by indecent speech on it. (153)
Accordingly, the Supreme Court has specifically held that the FCC can regulate radio broadcasts for profanity because children may hear it unwittingly. (154) However, under this medium-specific First Amendment analysis, the Supreme Court held that commercial telephone communications are not invasive enough to receive lesser protection. (155) Pertinent to this note, the Supreme Court has also refused, in Reno II, to analyze speech made on the Internet in the same manner as radio broadcasts. (156)
In this 1997 case, the Court distinguished the Internet on two grounds: the Internet is not an "invasive" medium, and there is insufficient government regulation of the Internet for someone to infer an "official or societal approval" of the message. (157) Both of these premises warrant reconsideration based on the Internet's development as a medium since the case's factual findings were made 14 years ago. Since that time, dramatic changes have occurred on the Internet that placed children in danger of viewing hate speech accidentally. In addition, increasingly heavy government regulation of the Internet has made it more reasonable for children and their parents to believe that what is presented on it is true and socially approved. Revisiting Reno II in order to adopt an "invasive medium" analysis in future cases is warranted on account of these two-fold developments.
After establishing First Amendment grounds for narrowly tailored regulation of Internet speech to protect child users from accidental harm, a specific remedy should be advanced. The influx of hate websites has created a high likelihood that a child user will view them without prior awareness of their objectionable and indecent content. A suitable remedy is to require ISPs to denote hate websites with profane or otherwise indecent material in such a way that children will know their likely content before viewing them. These ISPs should not be strictly liable for failure to do so, but there should be a high standard for what efforts are reasonable in order to stop minors from browsing these sites.
1. The Internet is an Invasive Medium
When a child needs to do a report on Jewish history or on the American Civil Rights Movement, the Internet provides a valuable tool for gathering information. However, if one enters the word "Jew" into Google's search engine, the second link shown (158)--"www.jewwatch.com"--is a virulent anti-Semitic website driven by hatred of Jewish people. (159) When searching for Martin Luther King, the third site shown (160) is called "martinlutherking.org," a website that exclusively contains racist speech and encourages students to print fliers to promote hatred of African-Americans. Considering how prevalent the Internet is as an educational tool, being confronted with seemingly innocuous websites that are hate-filled and borderline indecent is a situation in need of a remedy.
While the Internet serves children by improving research access, socialization, and communication with family, it also engenders negative influences such as hate websites, pornography, and violence. (161) Specifically, hate groups have turned to the Internet as a recruitment tool because prior methods, including distributing pamphlets on school grounds and in neighborhood mailboxes, were unsuccessful since teachers and parents could easily intervene. (162) The Internet has proven more useful because disaffected young people spend more time browsing than their parents do, and hate groups do not face the same intervention problems as with print material. (163) Instead, many children consider the virtual world their "home away from home," and are able to view hateful content without impediments. (164) Appropriately, the resulting ease with which children can access such negative Internet content is regarded in some quarters as one of the most serious impediments to the social development of children. (165)
Although precise statistics of how many children are harmed in such ways would be difficult to attain, statistics of Internet use by children are readily available. Generally speaking, the most recent U.S. Census statistics, from 2007, show that 56.3% of children access the Internet from inside the home or at school. (166) To hone these figures for this discussion, since a child may view objectionable material while using the Internet either at home or at school, it is important to include the rate of computer and Internet use by students--from first to twelfth grade--at both locations. First, the percentage of children using the Internet in their homes steadily increased from 1998 to 2003. For clear emphasis, while 52% of children ages 6 to 17 used the Internet in their home in 2003, (167) only 36% had done so in 2000, (168) and only 19% of children ages 3 to 17 used the Internet at home in 1998. (169) More specifically, 20% of children had a computer located in their bedroom and 54% of those computers accessed the Internet. (170) Furthermore, according to a 2008 study, 71.1% of children ages 6 to 11 accessed the Internet during the month prior to the study and 83.4% had done so at home. (171)
Second, the percentage of student Internet use at school rose dramatically from 1995 to 2005. In 1995, at the time the data was collected for Reno II, 50% of public schools had Internet access for students. (172) That figure had risen to 78% by 1997, to 95% by 1999, and to 99% by 2001. (173) At these schools, the percentage of instructional rooms with Internet access rose dramatically as well, showing that the increase was not merely the result of schools having a few token computers, but of a true increase in aggregate Internet use by children in each classroom. At the time of Reno II, in 1995, the figure for instructional use was 8%. That figure skyrocketed to 62% by 1999, 77% in 2000, 85% in 2001, 93% in 2003, and to 97% in 2005. (174)
Although the number of schools and computers equipped with the Internet is important, the ratio of public school students to instructional computers with access to the Internet is the statistic that hones this development most closely. This statistic was first kept in 1999, after the number of Internet-connected classrooms had risen eightfold since the time Reno II's data was collected. Thus, it is fair to assume that this ratio was much larger in 1995. This ratio was 12:1 in 1999, dropping to 9:1 in 2000, to 5.4:1 in 2002, and to 3.8:1 in 2005. (175)
Children engage in various activities on the Internet, including communication, locating information, entertainment, and doing homework. (176) According to the most recent available statistics, approximately 47% of students use computers at home to complete their homework. (177) More striking, however, are the percentages of children who connected to the Internet at home: 34% of students in grades 1-5 do so; 54% of those in grades 6-8; and 64% of those in grades 9-12. (178) The level of supervision that they receive, as well as their ability to browse efficiently without the risk of viewing indecent content, are relatively incalculable.
2. Analyzing these trends under FCC v. Pacifica and Reno v. ACLU
Under Pacifica and Reno II, the operative factors used to evaluate whether a medium is invasive are: (1) the likelihood that indecent content placed in the medium could be accidentally and inadvertently displayed to a user and (2) the level of activity and intent that the user would need to exercise in order to be inadvertently confronted with this material. (179) Although in 1997 the Court used these two factors as a basis for its rejection of the contention that the Internet is an invasive medium in Reno II, these criteria should now be used to draw the opposite conclusion.
The Supreme Court in Pacifica applied minimal scrutiny to a regulation on broadcast speech because:
[T]he broadcast audience is constantly tuning in and out, prior warnings cannot completely protect the listener or viewer from unexpected program content. To say that one may avoid further offense by turning off the radio when he hears indecent language is like saying that the remedy for an assault is to run away after the first blow. (180)
In rejecting the government's attempt to apply this reasoning to characterize the Internet as invasive, the Court in Reno II reasoned tentatively about the reality of Internet use. (181) Furthermore, the District Court's findings of fact, (182) upon which the Supreme Court closely relied, were made during a time period when Internet use by children was markedly different from what it is today.
The Court stated that the Internet should not be considered an invasive medium and should accordingly be afforded strict scrutiny review, because it is the "most participatory form of mass speech yet developed," (183) and it "is not as invasive as" broadcast media. (184) While a television is easily accessed by children, Internet material requires "a few clicks," a fact that the District Court found to be of the utmost legal significance. (185)
In Pacifica, the Court determined that because the listeners normally tune in and out without knowing what is being broadcast before tuning in, the radio broadcast is invasive. According to the Court in Reno II, however, even though the Internet makes information so widely available, users are not at a significant risk of viewing indecent material by accident because:
A document's title or a description of the document will usually appear before the document itself ... and in many cases the user will receive detailed information about a site's content before he or she need take the step to access the document.... Unlike communications received by radio or television, the receipt of information on the Internet requires a series of affirmative steps more deliberate and directed than merely turning a dial. A child requires some sophistication and some ability to read to retrieve material and thereby to use the Internet unattended. (186)
The process of searching the Internet and viewing its content has changed significantly from the above characterization, and should be considered just as uncertain and invasive as radio broadcasts.
First, it requires fewer affirmative steps by children. The Internet has become such a mainstay in childhood education that the "sophistication" the Court believed was required is no longer a prerequisite for Internet access. As a child learns how to use a computer and the Internet, the act of turning on the computer and opening a web browser becomes second nature, and almost habitual. Even though a child needs some sophistication to learn how to use the Internet, he or she no longer needs this sophistication after this learning process is completed. Thus, although these web-access steps are technically "affirmative," they are no longer as significant as the Court in Reno II determined. American society has stressed the Internet as an educational tool. It is easy for a child to access it, particularly when families leave the computer turned on and connected to the Internet during the day.
Thus, to determine whether the Internet is an "invasive" medium, one should start with a child sitting with his or her hand on the mouse and keyboard, having just viewed the search results in the Internet search engine browser. For example, a child has already typed the word "dog" into Google and has obtained the applicable search results. From this point, the step of "clicking on a website" is sufficiently analogous to turning on a radio. Both media are stationary at that point: the Internet search engine shows results that could be selected by clicking, and the radio is ready to be turned on with the station to be broadcasted displayed on the dial.
There is a difference, however, in how much information is available about the potential website and the potential radio station. While Google's search results show the title and description of the website, a radio does not provide any information concerning the potential stations, besides a radio frequency. Nevertheless, this difference is insufficient to treat the two media differently. Tactics used by technologically-savvy Internet users, such as "spamdexing" (187) and "Google Bombing," (188) have succeeded in misleading users concerning a website's actual content. Thus, the information provided in Google's search results is not completely reliable.
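The unreliability of search descriptions that this paragraph describes can be illustrated with a crude heuristic: comparing a page's declared meta keywords against its visible text. This is a minimal illustrative sketch, not how any real search engine detects spamdexing; the function name and the example inputs are assumptions for demonstration only.

```python
import re

def keyword_stuffing_score(visible_text: str, meta_keywords: list) -> float:
    """Fraction of declared meta keywords that never appear in the
    visible page text -- a crude signal that the page's description
    may mislead users about its actual content (illustration only)."""
    words = set(re.findall(r"[a-z']+", visible_text.lower()))
    if not meta_keywords:
        return 0.0
    missing = [k for k in meta_keywords if k.lower() not in words]
    return len(missing) / len(meta_keywords)

# A page whose metadata promises "history" and "education" content,
# but whose visible text mentions neither, scores high on this signal.
score = keyword_stuffing_score(
    "buy our products today", ["history", "education", "products"])
```

A high score suggests the page's search-result snippet may bear little relation to what a visitor, including a child researcher, will actually see.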
Moreover, the level of activity required to accidentally find objectionable or harmful material, such as hate speech websites, is just as significant as that required to accidentally stumble across indecent radio broadcasts. Clicking on each website that seems applicable is highly analogous to continuing to turn the radio dial until a desired station is reached. For these reasons, the statistics showing the prevalence of Internet use in the lives of children provide the justification for treating the Internet as an invasive medium that should be regulated by narrowly tailored means aimed specifically at minimizing these dangers.
Second, the likelihood of a child being accidentally confronted by indecent content on the Internet has increased dramatically since the Court's Reno II decision in 1997: the content itself has become more accessible and deceptive, and the content-filters that the Reno II Court believed would gradually minimize this possibility have proven to be impracticable or ineffective for this purpose. (189)
Today, when a child types a word or phrase into a search engine, such as Google or Yahoo!, the relevant websites and documents are shown with a short description. However, these descriptions are not as "detailed" (190) as the Court believed: the description provided by a search engine concerns the information applicable to the word searched for, but does not necessarily concern the website or document's entire content. Thus, when the child types the word "Jew" into google.com, the search results normally do not explain why the website or document is relevant to that term. There is no abstract or blurb concerning the overall content of the website or document. Instead, there is merely an excerpt of quoted language that includes the bolding or highlighting of the search term. What the remainder of the content entails is left to inference. This is problematic, because children may not be aware that hate websites exist so extensively on the Internet, and subsequently may not be able to infer the content from the search results.
Furthermore, the likelihood of finding an inappropriate website has also increased because the means by which children access the Internet have changed dramatically since the Reno II decision. The findings of fact in Reno II were made at a time when children used the Internet in more guarded ways than they do now. Specifically, children used to access the World Wide Web through programs such as America Online, Microsoft Network, and Prodigy, which were all capable of providing child-filters for the websites delivered through their services. (191) These programs also required that their users sign in under screen-names, readily allowing parents to control the access and settings of their children's Internet use. Today, children turn on the computer and go straight to Internet Explorer, or a similar web browser, without having to sign in under an account, and thus escape parental control.
3. Preventing Children's Accidental Viewing of Indecent Internet Material: Creating New Generic Top-Level Domain to Aid Content-Filtering Technology
The government is permitted to regulate the Internet as an invasive medium to restrict access to "[p]atently offensive, indecent material," (192) such as hate speech websites advocating profane intolerance and showing graphic depictions of violence when the material does not technically qualify as obscene material that appeals to the prurient interest. Also important is the evidence that since Reno II, technology has permitted advances in "tagging" systems that would permit users to screen certain websites from showing up in their browser. (193)
The Court in Reno II specifically rejected the idea that a "tagging" regime would allow users to block unwanted content, because such functions did not exist at the time, and web browsers were not equipped to screen tagged material at a user's request. (194) However, technological advances in domain registrations and assignments provide grounds to revisit this "tagging" concept. If new generic top-level domains (gTLDs) (195) were extended not only to sexual websites, (196) but also to hate speech websites, and if failure to comply with specific requirements could lead to criminal liability for the administrator, Internet content-control software would be effective in preventing minors from viewing these websites. Even more importantly, this method would not prevent adults from viewing the indecent content. It would therefore address the Supreme Court's main concern in Reno II: that content-control technology, while protecting children from indecent Internet material, would prevent adults from viewing and sending material according to their own preferences.
The top level domain of a website is important because it serves to identify the website to the content-control software. (197) The content-control technology's ability to catch websites with certain top-level domains could help bridge the gap between adults who want to see certain content, and children who should not be permitted to do so. For example, the ".xxx" top-level domain creates a zone for adult pornography, intentionally preventing children from entering this marketplace. (198) After the content-control technology catches the website, the user is either forbidden from viewing it--depending on what type of filter was in use--or has to affirmatively choose to enter the website if permitted.
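The zoning mechanism described above can be sketched in a few lines: a filter inspects each URL's top-level domain and, in a child-restricted mode, declines to display sites in a restricted zone. This is a minimal sketch under stated assumptions; the ".hate" label is purely hypothetical (only ".xxx" appears in the discussion above), and real content-control software is far more elaborate.

```python
from urllib.parse import urlparse

# Hypothetical restricted zone: ".xxx" is the real adult-content gTLD
# discussed above; ".hate" is an assumed label for illustration only.
RESTRICTED_TLDS = {"xxx", "hate"}

def is_restricted(url: str) -> bool:
    """Return True if the URL's top-level domain falls in the
    restricted zone that a child-mode filter would screen."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1].lower()
    return tld in RESTRICTED_TLDS

def filter_for_minor(urls: list) -> list:
    """A child-restricted mode simply drops restricted URLs from view,
    while an adult mode would pass the full list through unchanged."""
    return [u for u in urls if not is_restricted(u)]
```

Because the filter keys on the domain name alone, it needs no inspection of page content, which is why accurate labeling by website administrators (and a penalty for mislabeling) is essential to the scheme.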
Similarly, in order to make certain virulent hate speech websites subject to content-control software that keeps them out of the view of children, it would be prudent to use ICANN's development of new gTLDs to create an applicable domain for these websites. Although ICANN's recent movement toward creating and selling gTLDs is motivated primarily by economics and revenue creation, (199) this technology could be used for other purposes. ICANN, a non-profit organization operating on behalf of the U.S. government, could implement a government-ordered use of this technology without requiring compensation beyond the standard fees paid by website operators.
Any website that advocates violence against or hatred of people based on their race, religion, creed, gender, age, sexual orientation, or other immutable traits, should be labeled with a top-level domain name that indicates the existence of such content in its URL and facilitates filtering. This is not a prior restraint on adult Internet speech, but a type of cyber-zoning endeavor made possible by technological advancements. As an invasive medium, the Internet may be regulated in such a way that prevents children from accidentally being confronted with such indecent content.
To resolve any confusion over which websites qualify, a strict semantic rule utilizing the Supreme Court's "true threat" holding in Virginia v. Black (200) must be applied. Websites that contain profanity, including racial or other epithets based on immutable characteristics directed toward a group, and whose expressions would lead a listener within that group to feel threatened, should be considered hate speech websites requiring this new top-level domain.
It is difficult for ICANN to ensure that websites qualifying for this new top-level domain are actually labeled as such. While ICANN would be the authority on making this determination, the organization cannot be expected to search the World Wide Web infallibly for such websites. Thus, there must be a supplementary self-declaration process. In addition to ICANN's efforts to find these indecent websites and notify them that they must change their top-level domain, there should be a penalty for a website administrator's failure to do so. This penalty need not be criminal or monetary; it would be more appropriate for the penalty to attach to the website itself. Thus, until the necessary changes are made, an administrator's failure to follow the procedures for changing the website's top-level domain would result in the website being blocked for both adults and children.
In conclusion, the significant developments and changes in Internet use since Reno II require a revisiting of First Amendment protection of Internet speech. Given the extent to which the Internet has been stressed as an educational and recreational tool for children, it should now be considered an "invasive" medium of communication. Thus, the Supreme Court should apply the same lesser level of First Amendment protection that it afforded broadcast radio in F.C.C. v. Pacifica Foundation, allowing for content-based restrictions that prohibit "patently offensive, indecent material." (201) The above proposal of adding a new generic top-level domain for hate speech websites that fit ICANN's protocol for restriction would pass the Pacifica level of scrutiny, and should be adopted to prevent children from being confronted with indecent hate speech websites that could harm their psyches and development.
B. Revisiting R.A.V. for Regulation of "Secondary Effects" of Hate Speech Websites
The primary reasons for rethinking the United States' jurisprudence on Internet hate speech are the inherent differences between Internet speech and traditional methods of discourse, as well as the problems that arise from treating them analogously. (202) Granting Internet speakers the same right of free expression as traditional speakers is the proverbial attempt to fit a square peg into a round hole. According to the statistics and recent trends outlined in the beginning of this Note, granting unlimited protection to posters of Internet hate speech produces many "secondary effects." (203) Specifically, the increase in hate crimes, both online and offline, is one of these effects. Thus, regulation of Internet hate websites is permissible under the First Amendment because of their link to this increase.
The following discussion provides a remedy for these secondary effects: namely, restricting the ability of Internet users to engage in hate speech anonymously. Many of the problems occurring on the Internet are exacerbated by the fact that speakers are not identifiable by other users; thus, a regulation or judicial endeavor that focuses on Internet anonymity is appropriate. The remedy would specifically be to allow a civil plaintiff to discover the identity of someone who publishes hate material on a website without proving individual damages. This could take the form of a Congressional statute that also provides standing per se, in order to counteract the obvious problem of a plaintiff being turned away in court for being unable to articulate specific damages. Thus, Congress could take a stance similar to that of European nations, which believe that hate speech has a harmful effect in the aggregate, (204) and could write a statute facilitating more civil action.
1. Internet Anonymity, Its Benefits, and Its Drawbacks
Besides the oft-mentioned differences between Internet communication and traditional forms of discourse, (205) Internet users also have the ability to remain anonymous in their activity. (206) The Internet permits and encourages anonymous speech, which can have harmful results. Anonymous Internet use increases the opportunities for users to engage in criminal activity and hate speech without the typical fear of repercussions. (207) Specifically, while an anonymous Internet speaker cannot be readily identified and contradicted in the same arena in which he or she is speaking, listeners may readily identify anonymous speakers who use other media.
In addition, while traditional forms of speech can be regulated to eradicate potentially threatening content by means other than legal action, anonymous Internet speech can be controlled only through legislative and judicial regulation. For example, a newspaper will decline to publish threatening speech because reader backlash in a free market economy could reduce the paper's sales. Likewise, speech that threatens an individual in the speaker's direct, physical presence can be self-regulated by the listener's ability to take direct physical action or to invoke a police response against the speaker.
Internet speech escapes such informal regulation in several ways. First, ISPs cannot be held liable for speech that they allow to be posted. (208) Second, an anonymous user communicating through a computer cannot be confronted directly by the listener. Third, evidence of threatening Internet speech is more difficult to gather than evidence of traditional threatening speech: without being able to unmask the Internet speaker, the plaintiff lacks the ability to gather evidence and make a case that he or she would have if the speech were of virtually any other type.
Furthermore, anonymous Internet users are more apt to behave indecently or to advocate hate than anonymous speakers in other media. (209) One scholarly article has identified two reasons for this. First, anonymity releases users' nasty sides and "disinhibits" them through a lack of fear of repercussions. (210) Second, anonymity on the Internet may also have a "toxic" effect on a user, because it is frustrating to have to engage in activities one enjoys without an identity. (211)
It's not unlike the ignored child who starts acting 'bad' in order to acquire attention from the parent, even if it's scolding and punishment. The squeakiest wheel. Humans, being humans, will almost always choose a connection to others over no connection at all, even if that connection is a negative one. (212)
Thus, these people act antisocially because they are not known as themselves, and they feel that they must have some sort of effect on other people as a compensation mechanism. (213)
2. Secondary Effects of Hate Speech Websites
Hate websites are steadily increasing, hate group membership has reached an all-time high, and current economic and social issues are giving extremists an incentive to use the Internet to advocate hate and violence instead of tolerance. (214) These developments are worthy of being targeted for regulation regardless of the message that the websites themselves advance, because evidence also shows that "hate online inspires hate crimes." (215) Thus, any regulation of hate speech websites could actually be justified on the grounds that it aims to prevent a further increase in hate crimes, and even to lower the current rate.
The "secondary effects" theory espoused in City of Renton v. Playtime Theaters (216) and approved in R.A.V. v. City of St. Paul (217) provides a method for the government to regulate hate speech: namely, to attack the effects of these websites directly as opposed to targeting the speech itself. The Court in Renton upheld a zoning ordinance that prohibited adult movie theaters--that is, only theaters that showed sexually explicit films (218)--from being located within 1,000 feet of any "residential zone, single or multiple-family dwelling, church, or park, and within one mile of any school." (219) The Supreme Court found that this ordinance was not aimed at restricting the sexual content of the adult movie theaters at issue, but instead at the "secondary effects" that it had on the nearby community. (220)
In effect, the Court created the "secondary effects" rule: a content-based restriction is constitutionally justified as long as it is motivated by a permissible content-neutral purpose. (221) The Supreme Court later applied this rule to similar facts in City of Erie v. Pap's A.M., where a city ordinance prohibited public nudity in order to facilitate the closing of an exotic, nude dancing establishment. (222) The Court held that the purpose of the ordinance was not to suppress the message conveyed by exotic dancing, but to address the effect that such establishments had on their close surroundings. (223) These effects were considered "secondary," and a justifiable basis for regulation even though the ordinances were facially aimed at the sexual content of the expression. (224)
Important for potential hate speech regulation is the fact that the Supreme Court has expressly accepted--in R.A.V. v. City of St. Paul--that the secondary effects theory can be applied to hate speech, not merely sexual expression. (225) However, the Court reiterated in R.A.V. that the emotive response of a listener or viewer of speech is not a "secondary effect." (226) Thus, there must be some linked result that arises from the emotive response.
The government in Reno II advanced a "secondary effects" argument for regulating obscene or indecent material over the Internet, claiming that the "CDA is constitutional because it constitutes a sort of 'cyberzoning' on the Internet." (227) The Court rejected this argument, stating that the CDA applied broadly to the entire universe of Internet material in order to protect children from the "primary effects" of "patently offensive" and "indecent" speech. (228) In other words, the CDA regulated indecent Internet content by virtue of its direct effect on children. In contrast, hypothetically speaking, if a demonstrated link were established between children viewing indecent material and a severely heightened risk of immediate suicide, then the government could justify the CDA on grounds that it aims to prevent the "secondary effects" of childhood suicide.
Nevertheless, the Supreme Court's holding is admirable and correct. Moreover, it gives credence to this Note's proposal in Part III.A that the Internet's potentially harmful effects on children warrant providing it lesser First Amendment protection as an invasive medium.
The secondary effect of people soliciting membership in hate groups through their websites, and advocating poor and perhaps violent treatment of individuals and groups, is that such speech can and does incite listeners and viewers to commit hate crimes, such as harassment. (229) A restriction on such hate speech and activity would not restrict the content of the speech itself, but would attempt to limit the secondary effects of Internet hate speech. In other words, it would restrict the hate crimes and harassment that occur as a result of these websites.
3. First Amendment Protection of Anonymous Speech, and its Application to the Internet
Any regulation of anonymous hate speech must contend with existing First Amendment protections. Although American courts have historically given strict protection to a citizen's right to speak anonymously, (230) courts have recently been willing--according to differing standards--to permit the disclosure of an anonymous Internet speaker's identity during civil proceedings. (231)
The First Amendment has long protected the right to express oneself anonymously, (232) for "[a]nonymity is a shield from the tyranny of the majority" by virtue of its ability to protect "unpopular individuals from retaliation ... at the hand of an intolerant society." (233) As a result, controversial speech has historically issued from anonymous writers who feared persecution. (234) The overwhelming majority of these anonymous writers, however, were publishing documents regarding politically sensitive messages in the much more oppressive Colonial time period. Moreover, the seminal cases--Talley and McIntyre--revolve specifically around political speech, which has always been granted the utmost protection. There is a stark contrast between a persecuted minority's ability to publish writings that outline its plight, and an Internet user advocating violence against an entire group of people based on religion or an immutable trait. The latter is not political speech, and, according to some commentators, is not even reasoned speech. (235)
While the First Amendment protects anonymous speech through traditional methods of discourse, the Supreme Court has not decided whether that right extends to anonymous Internet use. Nevertheless, lower courts have recognized that this issue raises First Amendment concerns, and have generally protected the right to anonymous Internet expression while creating various standards for unmasking anonymous Internet speakers. (236) To unmask an anonymous Internet speaker for civil proceedings, a court must first find the allegations credible enough to warrant the commencement of evidentiary discovery regarding the speaker's identity. While citizens are not permitted to interact anonymously with each other in ways that violate the law, the "ability to speak one's mind without the burden of the other party knowing all the facts about one's identity can foster open communication and robust debate." (237) As such, there must be a heightened evidentiary standard governing plaintiffs' requests to unmask Internet speakers in order to ensure that plaintiffs "do not use discovery procedures to ascertain the identities of unknown defendants in order to harass, intimidate or silence critics in the public forum opportunities presented by the Internet." (238)
In response to this complicated First Amendment debate, courts have taken several approaches in what can best be described as a chaotic and unsettled area of law. Specifically, while there is significant case law and scholarly discussion of Internet anonymity in areas such as defamation and disclosure of corporate trade secrets, there is a considerable dearth of discussion concerning criminal charges and civil litigation stemming from Internet hate speech.
Nevertheless, the New Jersey Superior Court's ruling in Dendrite International v. Doe No. 3--where a pharmaceutical software company moved to compel Yahoo! to divulge the identity of four anonymous Internet posters in a suit for defamation, disclosure of trade secrets, and breach of contract (239)--provides a proper framework of principles to apply to the hate speech realm. In Dendrite, the court developed a five-part test: (240)
(1) the plaintiff must make efforts to notify the anonymous poster and allow a reasonable time for him/her to respond; (2) the plaintiff must identify the exact statements made by the poster; (3) the complaint must set forth a prima facie cause of action; (4) the plaintiff must bring forth sufficient evidence for each element of its claim; and (5) the court must balance the strength of the speaker's claim to First Amendment protection against the strength of the plaintiff's underlying legal claim and the need for disclosure of the speaker's identity. (241)
In conclusion, a working jurisprudence to limit the harmful secondary effects of hate speech websites should focus not on the messages themselves, but on how they are conveyed. Further, such a framework should revolve not around criminal sanctions, but around civil liability. If the remedy rested on a criminal statute, the burden of limiting secondary effects would fall on the shoulders of law enforcement officials, who understand the difficulties of locating anonymous Internet users. Instead, ordinary citizens who are harmed by these websites should hold the key.
As proposed above, the remedy should entitle a civil plaintiff to the identity of someone who publishes hate material on a website without proving individual damages, through a Congressional statute that also provides standing per se, in order to counteract the obvious problem of a plaintiff being turned away in court for being unable to articulate specific damages. Congress could thereby take a stance similar to that of European nations, which believe that hate speech has a harmful effect in the aggregate, (242) and write a statute allowing for easier civil action.
Also important are such a statute's effects on judicial efficiency and the potential for a flood of litigation. Thus, the statute would have to direct these claims first to an administrative agency--such as the F.C.C.--with the expertise to analyze a claim's potential success, and the power to prosecute it if needed.
(1.) E.g., Phyllis Korkki, Internet, Mobile Phones Named Most Important Inventions, N.Y. TIMES, Mar. 8, 2009, at BU2, available at http://www.nytimes.com/2009/03/08/business/08count.html.
(2.) For example, some teachers allow students to create and update Internet group pages as a way of creating study guides for final exams. See WILL RICHARDSON, BLOGS, WIKIS, PODCASTS, AND OTHER POWERFUL WEB TOOLS FOR CLASSROOMS 64-65 (3d ed. 2010); CYNTHIA HAYNES & JAN RUNE HOLMEVIK, HIGH WIRED: ON THE DESIGN, USE AND THEORY OF EDUCATIONAL MOOS 88-95 (1998).
(3.) See J. ROBERT BROWN, JR., THE REGULATION OF CORPORATE DISCLOSURE [section] 9.01(1) (3d ed. 2009) ("The development of the Internet has revolutionized information production, availability, and dissemination. The increased availability of information has helped to promote transparency, liquidity, and efficiency in our capital markets.").
(5.) See, e.g., Elizabeth Phillips Marsh, Purveyors of Hate on the Internet: Are We Ready for Hate Spam?, 17 GA. ST. U. L. REV. 379, 381-82 (2000) (discussing the underlying factors that allow for these abuses).
(6.) Theresa Howard, Online Hate Speech: Difficult to Police ... and Define, USA TODAY (Oct. 2, 2009), http://www.usatoday.com/tech/webguide/Internetlife/2009-09-30-hate-speech_N.htm.
(7.) See JOHN NOCKLEBY, Hate Speech, in 3 ENCYCLOPEDIA OF THE AMERICAN CONSTITUTION 1277 (Leonard W. Levy & Kenneth L. Karst eds., 2d ed. 2000).
(8.) Mark Potok, Rage on the Right: The Year in Hate and Extremism, S. POVERTY L. CTR. INTELLIGENCE REP. (2010), http://www.splcenter.org/getinformed/intelligence-report/browse-all-issues/2010/spring/rage-on-the-right.
(9.) States maintain the primary authority to enact legislation to punish hate crimes, and have done so by increasing penalties for crimes motivated by bias. See James E. Kaplan & Margaret P. Moss, Investigating Hate Crimes on the Internet, CTR. PREVENTION HATE VIOLENCE, 7-8 (Sept. 2003), http://www.partnersagainsthate.org/publications/investigating_hc.pdf. However, the federal government has also written hate crime laws to supplement State authority by protecting citizens' ability to partake in federally protected activity from interference by hate-motivated activity. Id. at 7 (citing 18 U.S.C. [section] 245 (prohibiting the use, or threat, of force against an individual based on race, color, religion, or national origin, in order to interfere with his or her right to enroll in public school, to enjoy any state benefit, program or service, or to be employed by a private employer or state agency); 18 U.S.C. [section] 247 (prohibiting interference with religious activity based on race or religious affiliation)). Similar to State penalty enhancement statutes, the Hate Crimes Sentencing Act, 28 U.S.C. [section] 994, enhances penalties for federal crimes that are motivated by bias and occur on federally owned property. Id.
(10.) See Rightwing Extremism: Current Economic and Political Climate Fueling Resurgence in Radicalization and Recruitment, DEP'T OF HOMELAND SEC., 4 (2009), http://www.fas.org/irp/eprint/rightwing.pdf.
(11.) Simon Wiesenthal Center's Digital Hate and Terrorism 2005 Report Reveals 25% Spike in Hate Sites, SIMON WIESENTHAL CTR. (Mar. 23, 2005), http://www.kintera.org/site/apps/nlnet/content2.aspx?c=fwLYKnN8LzH&b=4423619&ct=546389&printmode=1 [hereinafter Simon Wiesenthal Center Report].
(12.) Christopher Wolf, Needed: Diagnostic Tools to Gauge the Full Effect of Online Anti-Semitism & Hate, ANTI-DEFAMATION LEAGUE, 2 (June 16, 2004), http://www.adl.org/osce/osce_wolf.pdf.
(13.) See Midwest Shooting Spree Ends with Apparent Suicide of Suspect, CNN (July 5, 1999, 5:09 AM), http://www.cnn.com/US/9907/05/illinois.shootings.02/.
(14.) Suspected Shooter Said His Hate-Filled Leaflets Spoke 'The Truth', CNN (July 6, 1999, 1:55 AM), http://www.cnn.com/US/9907/06/smith.profile.01/.
(15.) Hate speech has been the "dark side of every advance in mass communications." Remarks by AT&T's James Cicconi on Internet Hate, ANTI-DEFAMATION LEAGUE (Nov. 17, 2008), http://www.adl.org/main_internet/James_Cicconi.htm. Radio, for example, was used by Franklin D. Roosevelt to alleviate the nation's financial fears during the 1930's, and by infamous anti-Semite Charles Coughlin to emphasize these fears in order to spread hate to 40 million listeners. Id.
(16.) The Simon Wiesenthal Center's research has shown that the Internet has become extremely important to extremist groups. Some of these sites gain recruits by allowing their visitors to play games where they "shoot" illegal immigrants, members of the Jewish faith, and black people. Internet Driving Hate Site Surge, BBC NEWS (Apr. 20, 2004, 11:24 AM), http://news.bbc.co.uk/go/pr/fr//2/hi/technology/3641895.stm. See also Tony Perry & Kim Murphy, White Supremacist, 3 Followers Charged with Harassing 4 Officials, L.A. TIMES, Nov. 11, 2000, at A20.
(17.) See Intelligence Files, S. POVERTY L. CTR, http://www.splcenter.org/getinformed/intelligence-files (last visited Apr. 16, 2011) (continually examining and updating list of notable groups that fall into this category).
(18.) See Deconstructing Hate Sites, MEDIA AWARENESS NETWORK, http://www.mediaawareness.ca/english/issues/online_hate/deconst_online_hate.cfm (last visited Apr. 8, 2011) [hereinafter Deconstructing Hate Sites].
(19.) See Randy Blazak, White Boys to Terrorist Men: Target Recruitment of Nazi Skinheads, 44 AM. BEHAV. SCI. 982, 986-88 (2001); Carolyn Turpin-Petrosino, Hateful Sirens ... Who Hears Their Song? An Examination of Student Attitudes Toward Hate Groups and Affiliation Potential, 58 J. SOC. ISSUES 281, 282-84 (2002).
(20.) See Blazak, supra note 19, at 993-94. See also Tactics for Recruiting Young People, MEDIA AWARENESS NETWORK, http://www.mediaawareness.ca/english/issues/online_hate/tactic_recruit_young.cfm (last visited Apr. 16, 2011) [hereinafter Tactics for Recruiting Young People].
(21.) See Tactics for Recruiting Young People, supra note 20.
(22.) Id. Such games include crossword puzzles that include racist terms and content, and cartoon characters which are almost identical, or closely related, to endearing characters on shows such as Sesame Street and Barney. Id.
(23.) See Phyllis B. Gerstenfeld et al., Hate Online: A Content Analysis of Extremist Internet Sites, 3 ANALYSES OF SOC. ISSUES & PUB. POL'Y 29, 30-31 (2003); Tactics for Recruiting Young People, supra note 20.
(24.) Deconstructing Hate Sites, supra note 18.
(25.) Id. (Examples of pseudo-scientific concepts are Phillip Rushton's writings on differences in the physical and intellectual abilities between races, as well as Dr. William Pierce's neo-Nazi fictionalized racialist revolution in The Turner Diaries that allegedly inspired the 1995 Oklahoma City bombing. Revisionists consist largely of people who deny the Holocaust.).
(26.) Tom O'Connor, Hate Sites and Hate-Driven Internet Activity, DR. TOM O'CONNOR, http://www.drtomoconnor.com/3410/34101ect02.htm (last updated Aug. 26, 2010).
(27.) Gerstenfeld et al., supra note 23, at 30-35.
(28.) Id. This misinformation movement on hate sites has also occurred in the mainstream media, and has subsequently aided media figures and politicians in spreading false propaganda about racial and ethnic minorities. Hate and Extremism, S. POVERTY L. CTR., http://www.splcenter.org/what-we-do/hate-and-extremism (last visited Mar. 31, 2011). Thus, in addition to recruiting people to their own cause, they also help political figures that share their hatred to gain popularity under the guise of politics. See id.
(29.) Be Web Aware--Violent and Hateful Content, MEDIA AWARENESS NETWORK, http://www.bewebaware.ca/english/violent_hateful_content.html (last visited Mar. 31, 2011). "Kids are exposed to a continuum of violence online ranging from sites with cruel and often racist humour, mature-rated movies and video games, real-life scenes of violence on sites like YouTube, to gruesome images on gore sites like rotten.com." Id.
(30.) Hate Websites Continue to Flourish, THE REGISTER (May 10, 2004, 2:01 PM), http://www.theregister.co.uk/2004/05/10/hate_websites_flourish/ (citing report by SurfControl, a United Kingdom-based software company that tracks websites for such content).
(31.) Jonathan Silverstein, Racist Video Game Incites Anger, ABC NEWS (May 1, 2006), http://abcnews.go.com/Technology/story?id=1910119&page=1; Extremists Declare "Open Season" on Immigrants: Hispanics Target of Incitement and Violence, ANTI-DEFAMATION LEAGUE (May 23, 2006), http://www.adl.org/main_Extremism/immigration_extremists.htm?Multi_page_sections=sHeading_6.
(32.) Silverstein, supra note 31.
(34.) See Press Release, Anti-Defamation League, Growing Proliferation of Racist Video Games Target Youth on The Internet (Feb. 19, 2002), available at http://www.adl.org/PresRele/Internet_75/4042_72.htm (discussing the rise in prevalence of such games in 2002).
(35.) See Lance Whitney, Wiesenthal Study Details Online Hate, Terror Groups, CNET (Mar. 22, 2010, 10:00 AM), http://news.cnet.com/8301-1023_3-10469814-93.html.
(36.) See id.
(37.) Gerstenfeld et al., supra note 23, at 30-35.
(39.) Id. at 33-36.
(40.) Whitney, supra note 35.
(41.) Id. (According to Rabbi Cooper of the Wiesenthal Center, the hate sites point followers to other sites "like YouTube and LiveLink [that] display videos that purportedly show you how to create a binary explosive, such as the type used by 'shoe bomber' Richard Reid in 2001 and 'underwear bomber' Umar Farouk Abdulmutallab in December.").
(44.) Rawlson O'Neil King, Solutions and Policy Combat Spreading Hate, WEB HOST INDUS. REV. (May 21, 2004), http://www.thewhir.com/web-hostingnews/solutions-and-policy-combat-spreading-hate (citing SurfControl report that indicates a 300 percent increase since 2000).
(45.) Steve Barmazel, #&?!!@*%$!: There Is No Stopping Hate Speech, 15 CAL. LAW. 41, 41 (1995), available at http://www.callawyer.com/clstory.cfm?pubdt=NaN&eid=26736&evid=1.
(46.) Alexander Tsesis, The Empirical Shortcomings of First Amendment Jurisprudence: A Historical Perspective on the Power of Hate Speech, 40 SANTA CLARA L. REV. 729, 756 n.213 (2000) (quoting Louise Surette, New Laws to Curb Hate on Internet?: Symposium Urges Federal Action, GAZETTE (Montreal), Mar. 24, 1999, at A12).
(47.) Report of the High Commissioner for Human Rights, On the Use of the Internet for Purposes of Incitement to Racial Hatred, Racist Propaganda and Xenophobia, and on Ways of Promoting International Cooperation in this Area, U.N. World Conference Against Racism, Racial Discrimination, Xenophobia and Related Intolerance, par. 15 (Geneva, June 2001) (citing www.wiesenthal.org).
(48.) This figure includes "sites from every continent and in many languages including Spanish, German, Russian, Japanese and Arabic." Simon Wiesenthal Center Report, supra note 11.
(49.) Tom O'Connor, Hate Sites and Hate-Driven Internet Activity, DR. TOM O'CONNOR, http://www.drtomoconnor.com/3410/34101ect02.htm (last updated Aug. 26, 2010).
(50.) Whitney, supra note 35.
(51.) The Southern Poverty Law Center uses the following definition of a hate group: a group or organization whose "beliefs or practices ... attack or malign an entire class of people, typically for their immutable characteristics." Hate Map: Active U.S. Hate Groups, S. POVERTY L. CTR., http://www.splcenter.org/getinformed/hate-map (last visited Aug. 3, 2010). These groups engage not only in criminal activity, but also in demonstrations such as rallies, marches, and distribution of leaflets and publications. Id.
(52.) David Holthouse, The Year in Hate, 2008: Number of Hate Groups Tops 900, S. POVERTY L. CTR. (2009), http://www.splcenter.org/getinformed/intelligence-report/browse-all-issues/2009/spring/the-year-in-hate.
(54.) See Potok, supra note 8.
(55.) See David Holthouse & Mark Potok, The Year in Hate, 2007: Active U.S. Groups Rise to 888 in 2007, S. POVERTY L. CTR. (2008), http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2008/spring/the-year-in-hate.
(56.) Theresa Howard, Online Hate Speech: Difficult to Police ... and Define, USA TODAY, Oct. 2, 2009, available at http://www.usatoday.com/tech/webguide/Internetlife/2009-09-30-hatespeech_N.htm (citing Anti-Defamation League research).
(58.) DEP'T OF HOMELAND SEC., supra note 10, at 2, 3.
(59.) Id. at 4. There is a distinct parallel between the current increase in hate groups--resulting from the economic crisis and illegal immigration--and the 1990s, when right-wing extremism also underwent a resurgence on account of polarizing social issues and criticisms caused by the economic recession of that decade. Id. Specifically prominent was the concern that illegal immigrants, on account of their willingness to accept lower payment for their services, were displacing American citizens from their jobs. Id. at 5. In addition, free trade agreements, opposition to gun control efforts, and social issues like abortion and same-sex marriage were also contributors. Id. at 4-5.
(60.) Id. at 4. See also Stephanie Chen, Growing Hate Groups Blame Obama, Economy, CNN (Feb. 26, 2009, 7:34 PM), http://www.cnn.com/2009/US/02/26/hate.groups.report/index.html. According to Dr. Alvin F. Poussaint, a Harvard Medical School professor of psychiatry, people often join hate groups on account of paranoia concerning a specific group of people, and may react violently to this group. Id. This current paranoia is largely directed at illegal immigrants, who have been blamed for the deteriorating economy on account of their alleged propensity to take out subprime loans. Id. This "scapegoating" has historically occurred "in times of economic distress" due to the psychological tendency to displace frustration during times of vulnerability. Id.
(61.) DEP'T OF HOMELAND SEC., supra note 10, at 2.
(62.) Id. at 8 (growth of these groups subsided in 1995 after the intense government scrutiny of extremist political groups that resulted from the 1995 Oklahoma City bombing).
(63.) See id.
(64.) Wolf, supra note 12, at 1-2.
(65.) Id. at 4.
(66.) Id. at 6-7.
(67.) See id. at 5-6. Moreover, the Internet and related encryption technologies have stifled law enforcement's ability to deter or prevent violence by allowing these extremist groups to communicate and network--both domestically and internationally--with each other. Id.
(68.) Uniform Crime Reports, FED. BUREAU INVESTIGATION, http://www.fbi.gov/ucr/ucr.htm (last visited Oct. 13, 2010) (posting hate crime statistics, organized by various factors, from 1995 to 2008). In 2008, 7,783 hate crime incidents involving 9,168 offenses were reported to the FBI's Uniform Crime Reports, while 7,947 were reported in 1995. Id.
(69.) See id. The Uniform Crime Reports designate two types of hate crimes: crimes against persons and crimes against property. Crimes against persons include murder, negligent manslaughter, rape, aggravated assault, simple assault, and intimidation. Crimes against property include robbery, burglary, larceny, arson, and vandalism. Id. Intimidation is the only one of these crimes that would fairly encompass offenses committed on the Internet, for even the most heinous Internet remark or website would likely not rise to the level of assault since it was not made in the physical presence of the victim. Importantly, intimidation offenses generally comprise approximately thirty percent of the reported hate crimes in a given year. Based on this number, there are fewer reported instances of racial intimidation than there are hate websites. See id. As this seems counterintuitive, it lends general credence to the government's assessment that there are ten times more hate crimes per year than those that are reported.
(70.) See Chen, supra note 60.
(71.) Joel Connelly, Sharp Increase in Hate Crimes "Against Every Group," SEATTLE PI (Feb. 24, 2010, 10:00 PM), http://www.seattlepi.com/national/article/ Sharp-increase-in-hate-crimes-against-every-882986.php. Specifically, hundreds of incidents of racially-motivated abuse and intimidation were reported in the three weeks after the November Presidential election. Matthew Bigg, Election of Obama Provokes Rise in U.S. Hate Crimes, REUTERS, Nov. 24, 2008, available at http://www.reuters.com/article/2008/ 11/24/us-usa-obama-hatecrimesidUSTRE4AN81U20081124.
(72.) Examples of this phenomenon are "hacktivism" and "cyberterrorism": hundreds of thousands of international groups have become Internet literate in order to use it as a "weapon of control, domination, terrorism, and destruction--a weapon that has already reached to almost all corners of the world." Shahid M. Shahidullah, Federal Laws and Judicial Trends in the Prosecution of Cyber Crime Cases in the United States: First and Fourth Amendment Issues, 45 CRIM. L. BULL. 929 (2009).
(74.) See, e.g., ALEXANDER MEIKLEJOHN, FREE SPEECH AND ITS RELATION TO SELF-GOVERNMENT 27 (1948).
(75.) See, e.g., New York Times Co. v. Sullivan, 376 U.S. 254, 269-70 (1964) (specifically deeming the ability to criticize government as the principle meaning of the First Amendment while discussing its separate issue, the amount of protection provided to defamatory speech).
(76.) Most appropriate to this note is the Supreme Court's discussion, in Reno v. ACLU (Reno II), 521 U.S. 844, 885 (1997), of whether or not the Internet is the "new marketplace of ideas." This concept is rooted in Justice Oliver Wendell Holmes' dissenting opinion in Abrams v. United States:
[M]en ... may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas--that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.
250 U.S. 616, 630 (1919) (Holmes, J., dissenting). The doctrine has experienced a marked resurgence in Supreme Court decisions and opinions since the 1970s. The phrase was mentioned 27 times in Supreme Court opinions before 1970, and 99 times from 1970 to 1996. W. Wat Hopkins, The Supreme Court Defines the Marketplace of Ideas, 73 JOURNALISM & MASS COMM. Q. 40, 42 (1996), available at http://www.comm.umd.edu/faculty/tpg/HopkinsWeekSeven.pdf. Most recently, the Supreme Court used this doctrine in holding that the government may not suppress political speech--in this case a nonprofit organization's video criticizing Hillary Clinton before the 2008 Presidential election--on the basis that the speaker is a corporation. See Citizens United v. Federal Election Comm'n, 130 S. Ct. 876, 886-88 (2010).
(78.) See Whitney v. California, 274 U.S. 357, 375, 377 (1927) (Brandeis, J., concurring) ("If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence."). See also Marsh, supra note 5, at 391-92 (questioning whether the generally accepted constitutional principle that the remedy for speech is more speech is properly applied in the Internet context).
(79.) See N.Y. Times Co., 376 U.S. at 279 n.19 ("Even a false statement may be deemed to make a valuable contribution to public debate, since it brings about 'the clearer perception and livelier impression of truth, produced by its collision with error.'" (quoting JOHN STUART MILL, ON LIBERTY 15 (1947))).
(80.) See, e.g., R.A.V. v. City of St. Paul, 505 U.S. 377 (1992).
(81.) See Turner Broad. Sys. v. FCC, 512 U.S. 622, 640 (1994).
(82.) Whitney, 274 U.S. at 375 (Brandeis, J., concurring) (emphasis added).
(83.) See id.
(85.) LAURENCE H. TRIBE, AMERICAN CONSTITUTIONAL LAW 786 (2d ed. 1988).
(86.) Alexander Tsesis, Hate in Cyberspace: Regulating Hate Speech on the Internet, 38 SAN DIEGO L. REV. 817, 840-41 (2001).
(87.) For example, many commentators have posited that the Nazi Holocaust was perpetrated by normal citizens who were indoctrinated by years of education and racist literature, after popular culture eventually accepted anti-Semitism and agreed to kill and enslave Jewish people. Id. at 840-41 n.161 (citing DANIEL JONAH GOLDHAGEN, HITLER'S WILLING EXECUTIONERS: ORDINARY GERMANS AND THE HOLOCAUST (1997); JOHN WEISS, IDEOLOGY OF DEATH: WHY THE HOLOCAUST HAPPENED IN GERMANY (1996)).
In the long run, true ideas tend to drive out false ones. The problem is that the short run may be very long ... [and] we may become overwhelmed by the inexhaustible supply of freshly minted, often very seductive, false ideas.... Genocide is an example.... Truth may win, and in the long run it may almost always win, but millions of Jews were deliberately and systematically murdered in a very short period of time.... Before those murders occurred, many individuals must have come to 'have false beliefs.'
Harry Wellington, On Freedom of Expression, 88 YALE L.J. 1105, 1130-32 (1979).
(89.) Systematic oppression takes time, but has more of an impact than single acts of violence perpetrated after uttering racist comments. Id. "There is a close, and virtually necessary, connection between advocacy, preparation, coordination, infrastructure development, training, indoctrination, desensitization, discrimination, singular violent acts, and systematic oppression." GORDON W. ALLPORT, THE NATURE OF PREJUDICE 57 (1979).
(90.) See First Nat'l Bank of Boston v. Bellotti, 435 U.S. 765, 804 (1978) (White, J., dissenting) ("some have considered to be the principal function of the First Amendment, the use of communication as a means of self-expression, self-realization, and self-fulfillment....").
(91.) Recognizing a speaker's place in society, as well as the fact that words can invoke a violent response because of their potential to injure, the Court in Chaplinsky v. New Hampshire, 315 U.S. 568 (1942), expressly recognized a "fighting words" category of unprotected speech. The Court upheld the conviction of a Jehovah's Witness who declared on the street that all other religions are a "racket." Id. at 569-71. The Court did not reach its holding because people's religious views are so sensitive; rather, it explained:
[T]he right of free speech is not absolute at all times and under all circumstances. There are certain well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise any constitutional problem. These include the lewd and obscene, the profane, the libelous, and the insulting or fighting words--those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.
Id. at 571-72.
(92.) See N.Y. Times Co. v. Sullivan, 376 U.S. 254, 301-03 (1964) (developing series of rules, regarding subject matter of speech and the identity of the allegedly defamed plaintiff, to determine when First Amendment permits defamatory speech, and when it does not).
(93.) See Roth v. United States, 354 U.S. 476, 487-90 (1957) (holding that sexual speech appealing to the "prurient interest" is outside the area of constitutionally protected speech). See also Miller v. California, 413 U.S. 15, 36 (1973) (establishing a test for determining if the speech in question is obscene and outside constitutional protection).
(94.) See supra text accompanying notes 13-14.
(95.) See supra Part I.A.3.
(96.) 505 U.S. 377 (1992) (analyzing cross burning, a form of hate speech, under the fighting words exception to First Amendment protection).
(97.) 538 U.S. 343 (2003).
(98.) 343 U.S. 250 (1952).
(99.) Id. at 258.
(100.) Id. at 251, 267.
(101.) Id. at 257 (quoting Cantwell v. Connecticut, 310 U.S. 296, 309-10 (1940)). The Court also alluded to a state's "police power" in Beauharnais, despite not mentioning the phrase explicitly, emphasizing that a state should be permitted to punish a libelous utterance directed at a group in the same way that it can one directed against an individual, unless a court deems it a "willful and purposeless restriction unrelated to the peace and well-being of the State." 343 U.S. at 258. This language mimics minimal, rational basis scrutiny, where a state is permitted to protect the well-being of its citizens by any means that do not trigger a reason to suspect arbitrary decisions or questionable motivations. In 1952, the Court believed that restricting hate speech was within a State's proper role as a sovereign distinct from the federal government and courts.
Worth noting is that Justice Blackmun's concurring opinion in R.A.V. v. City of St. Paul seems to retain police power undertones:
I see no First Amendment values that are compromised by a law that prohibits hoodlums from driving minorities out of their homes by burning crosses on their lawns, but I see great harm in preventing the people of Saint Paul from specifically punishing the race-based fighting words that so prejudice their community.
505 U.S. at 416 (Blackmun, J., concurring). The hate speech in question seemed to raise a safety issue that Blackmun believed the municipality should have the right to police; nevertheless, the statute's method of doing so was overbroad to him just as it was to the Majority.
(102.) ERWIN CHEMERINSKY, CONSTITUTIONAL LAW: PRINCIPLES AND POLICIES 1011-13 (3d ed. 2006). Although Beauharnais has not been overruled, Professor Chemerinsky states that the Court's subsequent decision in National Socialist Party of America v. Village of Skokie illustrates that the case no longer has weight as good law. Id. at 1012 (citing Nat'l Socialist Party of Am. v. Village of Skokie, 432 U.S. 43, 44 (1977)).
(103.) Kenneth L. Karst, Boundaries and Reasons: Freedom of Expression and the Subordination of Groups, 1990 U. ILL. L. REV. 95, 98 (drawing the conclusion, however, that excluding such speech would give the government too much authority not only to decide what "Reason" is, but also to exclude outgroups from this conversation later under the same principle). Professor Karst references three Justices as arguing for a group libel exception: Justice Stevens in FCC v. Pacifica Foundation, 438 U.S. 726 (1978), and Young v. American Mini Theatres, Inc., 427 U.S. 50 (1976); Chief Justice Burger in Clark v. Community for Creative Non-Violence, 468 U.S. 288 (1984); and Justice Powell in Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749 (1985). Id. at 98 n.9. Karst also cites various commentators for this proposition. Id. (citing C. MACKINNON, FEMINISM UNMODIFIED chs. 13, 14 (1987)).
(104.) See R.A.V., 505 U.S. at 380-91.
(105.) Id. at 380.
(106.) Id. at 391.
(108.) Id. at 388-89.
(109.) Id. at 389 (citing City of Renton v. Playtime Theatres, Inc., 475 U.S. 41 (1986)). Although this exception is the topic of Part II.B of this note, the analysis finds its roots not in hate speech cases, but in a case discussing lower levels of protection for "obscene" speech.
(110.) City of Renton v. Playtime Theatres, Inc., 475 U.S. 41, 44 (1986).
(111.) R.A.V., 505 U.S. at 388 ("[W]hen the basis for the content discrimination consists entirely of the very reason the entire class of speech at issue is proscribable, no significant danger of idea or viewpoint discrimination exists.").
(112.) Virginia v. Black, 538 U.S. 343, 357-61 (2003).
(114.) Id. at 359-60.
(116.) See id.
(117.) See id. at 361-63.
(118.) See discussion infra Part II.A.
(119.) Christopher Wolf, Chairman, Internet Task Force, Anti-Defamation League, Remarks at 3rd International Symposium on Hate on the Internet (Sept. 12, 2006), available at http://www.adl.org/main_Internet/Internet_hate_law.htm. Wolf describes the general reaction to websites that deny the Holocaust and advocate virulent anti-Semitism, portray homosexuals as subhuman, and use racial epithets and caricatures: "Audience members almost always have the same reaction to what they see. When they are finished shaking their heads in disbelief and after they say 'disgusting,' audience members frequently are heard to exclaim: 'There oughta be a law.'" Id.
(120.) See, e.g., Mark C. Alexander, The First Amendment and Problems of Political Viability: The Case of Internet Pornography, 25 HARV. J.L & PUB. POL'Y 977, 988 (2001-02).
(121.) See Karst, supra note 103, at 103-04 (describing the approach and arguing it does not work for women, gays, and racial minorities).
(122.) See supra Part I.B.1.
(123.) Communications Decency Act, 47 U.S.C. [section] 223(e) (1996). Although this immunity is not absolute, there is no provision limiting it in hate speech cases, and it is commonly settled that ISPs can successfully invoke [section] 230 immunity against speech-based claims such as defamation. See Immunity for Online Publishers Under the Communications Decency Act, CITIZEN MEDIA LAW PROJECT, http://www.citmedialaw.org/legal-guide/immunity-online-publishers-under-communications-decency-act (last updated Feb. 18, 2011). Furthermore, ISPs are immune from claims of fraud, obscenity, assault, harassment, and other similar causes of action. Enrico Shaefer, Immunity For Internet Service Providers Under the Communications Decency Act (CDA), EZINE ARTICLES, http://ezinearticles.com/?Immunity-For-Internet-Service-Providers-Under-the-Communications-Decency-Act-(CDA)&id=2470403 (last visited Oct. 13, 2010). On the other hand, federal law specifically targets the storing of copyright-infringing information online. ISPs that store copyright-infringing information on their servers are only provided immunity if they meet three specific statutory elements. See Digital Millennium Copyright Act, Pub. L. No. 105-304, 112 Stat. 2860 (1998) (codified at 17 U.S.C. [section] 512(c)(1)(A) to (C) (2006)). Liability under this Act is only precluded if the ISP does not have actual or constructive knowledge of the infringing material or activity, does not receive a financial benefit from the infringing activity, and responds to notification of an infringement claim by quickly removing or disabling access to the material. Id.
(124.) Since ISPs willingly host websites regardless of their content for modest charges, websites "that contain bias or prejudice based on race, religion, ethnicity, gender, disability, and sexual orientation have taken full advantage of the low-cost opportunity to spread their messages 24 hours a day to millions of people at an instant." JAMES E. KAPLAN & MARGARET P. MOSS, CTR. FOR THE PREVENTION OF HATE VIOLENCE, INVESTIGATING HATE CRIMES ON THE INTERNET (2003) (citing KESSLER, J., ANTI-DEFAMATION LEAGUE, POISONING THE WEB: HATRED ONLINE, AN ADL REPORT ON INTERNET BIGOTRY, EXTREMISM AND VIOLENCE (1999)).
(125.) A. Knoll, Any Which Way But Loose: Nations Regulate the Internet, 4 TUL. J. INT'L & COMP. L. 275, 287-88 (1996). In particular, Otto Schily, the Minister of the Interior of Germany, estimated in 2001 that approximately 90% of neo-Nazi materials posted on the Internet by German citizens were hosted by ISPs in the United States. See Report of the High Commissioner for Human Rights, supra note 47, at 5 n.9.
(126.) See Ronald J. Rychlak, Compassion, Hatred, and Free Expression, 27 MISS. C. L. REV. 407, 423 (2008) (citing Censors Trying to Harness the Net, ZGRAM (June 23, 2004), http://www.zundelsite.org/english/zgrams/zg2004/2004-June/000886.html). The United States is also responsible for the highest percentage of cyber-crimes. For example, in 2007, the United States had the highest percentage of the world's bot-infected computers (14%), was the target of 56% of the world's denial-of-service attacks, and had a 146% increase in reported cases of malicious activity from July to December 2007 (the total number was 499,811). See SYMANTEC, SYMANTEC INTERNET SECURITY THREAT REPORT: TRENDS FOR JULY-DEC. 2007, at 19-21 (2008), available at http://www.symantec.com/security_response/whitepapers.jsp.
(127.) Rychlak, supra note 126, at 422.
(128.) See generally Wolf, supra note 12. Particularly after September 11, 2001, Congress recognized that cyber-crime must be addressed as it pertains to national security. See Shahidullah, supra note 72. As such, the USA PATRIOT Act of 2001 was enacted--amending the 1986 Electronic Communications Privacy Act and the 1984 Computer Fraud and Abuse Act--to provide more authority to law enforcement to investigate cyber crimes. Id. Specifically, the PATRIOT Act requires ISPs to disclose IP addresses--and other stored communications--to law enforcement when requested. Id. Other pertinent federal laws include the Cyber Security Research and Development Act of 2002, the 21st Century Department of Justice Reauthorization Act of 2002, the E-Government Act of 2002, and the Intelligence Reform and Terrorism Protection Act of 2004. Id.
(129.) Council Directive 2000/31/EC, 2000 O.J. (L 178) (EC), available at http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0031:EN:HTML. This "Directive on Electronic Commerce" grants immunity for content that is hosted, as long as the provider has neither been informed of its illegality nor failed to act promptly once informed. Id. at art. 14. More generally, however, the Directive forces its Member States to "prohibit any kind of interception or surveillance of such communications by others than the senders and receivers, except when legally authorised." Id. at art. 15.
(130.) See Kathleen E. Mahoney, Hate Speech: Affirmation or Contradiction of Freedom of Expression, 1996 U. ILL. L. REV. 789, 803 (1996).
(131.) Tsesis, Hate in Cyberspace, supra note 86, at 858 (citing Charles J. Ogletree, Jr., The Limits of Hate Speech: Does Race Matter?, 32 GONZ. L. REV. 491, 501 (1996-97)).
(132.) See Rychlak, supra note 126, at 422-23.
(133.) Mari Matsuda, Public Response to Racist Speech: Considering the Victim's Story, 87 MICH. L. REV. 2320, 2341 (1989). Specifically, a 1994 International Convention posits that member states:
(a) Shall declare as an offence punishable by law all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof;
(b) Shall declare illegal and prohibit organizations, and also organized and all other propaganda activities, which promote and incite racial discrimination, and shall recognize participation in such organization or activities as an offence punishable by law; [and]
(c) Shall not permit public authorities or public institutions, national or local, to promote or incite racial discrimination. International Convention on the Elimination of All Forms of Racial Discrimination, art. 4, Jan. 4, 1969, 660 U.N.T.S. 195.
(134.) Rychlak, supra note 126, at 422 (citing Michelle Madigan, Internet Hate-Speech Ban Called "Chilling," Council of Europe's Internet Restrictions Raise Uneasy Questions About Civil Rights Online, PCWORLD (Dec. 2, 2002), http://www.pcworld.com/article/id,107499-page,1/article.html).
(135.) Additional Protocol to the Convention on Cybercrime, Concerning the Criminalisation of Acts of a Racist and Xenophobic Nature Committed Through Computer Systems, European Council, Jan. 28, 2003, available at http://conventions.coe.int/treaty/en/treaties/html/189.htm [hereinafter Additional Protocol]. This Additional Protocol imposed on European states the obligation to criminalize the following acts conducted through computer systems: (1) the dissemination of racist and xenophobic material; (2) making racist and xenophobic threats; (3) making racist and xenophobic motivated insults; (4) engaging in revisionism; and (5) aiding and abetting in the above four offenses. Id. at arts. 3-7. It is also important to note that this only involved public material as opposed to private communications between individuals. Id.
(136.) Rychlak, supra note 126, at 420 (citing Christopher Wolf, A Comment on Private Harms in the Cyber-World, 62 WASH. & LEE L. REV. 355 (2005)).
(137.) See generally Additional Protocol, supra note 135.
(138.) Rychlak, supra note 126, at 419-20.
(139.) Id. at 419-24.
(140.) Id. at 424.
(142.) Tsesis, Hate in Cyberspace, supra note 86, at 859 ("The United States Supreme Court's short-sightedness is, therefore, causing waves around the world. In effect, United States jurisprudence, along with the incitement and danger to democracy attached to it, makes it more difficult for other countries to eliminate hate speech.").
(143.) Germany, Canada, and the United Kingdom have notably restricted the transmission of racist Internet materials. Germany's guarantee of free expression, both generally and through the Internet, is limited by general legislation statutes protecting the youth, and the right to self-respect. GRUNDGESETZ FÜR DIE BUNDESREPUBLIK DEUTSCHLAND [GRUNDGESETZ] [GG] [BASIC LAW] art. 5.2 (Ger.). Specifically, individuals and groups are criminally liable for assaulting another's dignity if they (1) incite others to hate particular groups of the population; (2) advocate violent or arbitrary measures against these groups; or (3) slander them as a whole. Tsesis, Hate in Cyberspace, supra note 86, at 862 (citing STRAFGESETZBUCH [STGB] [PENAL CODE], Nov. 13, 1998, art. 130 (Ger.)). To enforce Canada's general laws against hate propaganda, the Human Rights Act has codified its Supreme Court's jurisprudence--that regulation of hate speech is legitimate because it harms not only the individual victims themselves, but also society as a whole--to prohibit people or groups from using telecommunications to spread hatred or incite discrimination against any identifiable group. See Tsesis, Hate in Cyberspace, supra note 86, at 860 (citing Canadian Human Rights Act, R.S.C., ch. H-6, [section] 13(1) (1985)). In the United Kingdom, the Racial and Religious Hatred Act of 2006 states: "A person who uses threatening words or behavior, or displays any written material which is threatening, is guilty of an offence if he intends thereby to stir up religious hatred." Racial and Religious Hatred Act, 2006, c. 1, [section] 29B (U.K.). The Criminal Justice and Immigration Act of 2008 later criminalized inciting hatred on the basis of sexual orientation. Criminal Justice and Immigration Act, 2008, c. 4, [section] 74, sch. 16, [paragraph] 14 (U.K.).
(144.) Christopher Wolf, Chairman of the Anti-Defamation League Internet Task Force, described this problem at the 3rd International Symposium on Hate on the Internet:
The borderless nature of the Internet means that, like chasing cockroaches, squashing one does not solve the problem when there are many more waiting behind the walls--or across the border. Many see prosecution of Internet speech in one country as a futile gesture when the speech can re-appear on the Internet almost instantaneously, hosted by an ISP in the United States.
Wolf, supra note 119.
(145.) See James Cicconi, Keynote Address at the Global Summit on Internet Hate Speech (Nov. 17, 2008), available at http://www.adl.org/main_Internet/James_Cicconi.htm.
(146.) Stephanie Berrong, Internet Hate: A Tough Problem to Combat, SECURITY MGMT., Mar. 1, 2009, http://www.securitymanagement.com /article/Internet-hate-tough-problem-combat-005259.
(147.) See R.A.V. v. City of St. Paul, 505 U.S. 377, 379-84 (1992).
(148.) 438 U.S. 726 (1978).
(149.) Id. at 748.
(150.) See Red Lion Broad. Co. v. FCC, 395 U.S. 367, 399-400 (1969).
(151.) See Pacifica, 438 U.S. at 748.
(152.) Pacifica Found. v. FCC, 556 F.2d 9, 37 n.18 (D.C. Cir. 1977), rev'd, 438 U.S. 726 (1978).
(153.) Pacifica, 438 U.S. at 769.
(154.) See id.
(155.) See Sable Communications of California, Inc. v. FCC, 492 U.S. 115, 128-31 (1989).
(156.) Reno II, 521 U.S. at 868-70.
(157.) Id. at 868-69 & n.33.
(158.) As of Apr. 8, 2011.
(159.) Hate Sites, Their Hosts & Freedom of Speech, CHEAP HOSTING DIRECTORY (Sept. 24, 2009), http://www.cheaphostingdirectory.com /hate-sites-their-hosts-freedom-of-speech/.
(160.) As of Apr. 8, 2011.
(161.) Chang-Hoan Cho & Hongsik John Cheon, Children's Exposure to Negative Internet Content: Effects of Family Context, 49 J. BROAD. & ELEC. MEDIA 488, 489 (2005), available at http://www.allbusiness.com/technology/Internet-technology/867440-1.html.
(162.) See generally Tactics for Recruiting Young People, supra note 20.
(165.) Research of Internet Downside Issues, INTERNET ADVISORY BD., iii-v (Aug. 2001), available at http://www.Internetsafety.ie/ website/ois/oisweb.nsf/0/F970D473024C7B2E80257 4C5004DFB69/$File/am%C3%A1rach%20con.%20research%20of%20Internet% 20downside%20issues%20Aug%2001.pdf.
(166.) Computer and Internet Use in the United States: October 2007, Reported Internet Usage for Individuals 3 Years and Older, by Selected Characteristics, U.S. CENSUS BUREAU (June 2009), http://www.census.gov/population/www/socdemo/computer/2007.html. As of 2003, approximately 91% of children from nursery school through twelfth grade used computers, and approximately 59% of these children used the Internet. MATTHEW DEBELL & CHRIS CHAPMAN, U.S. DEPARTMENT OF EDUCATION, INSTITUTE OF EDUCATION SCIENCES, COMPUTER AND INTERNET USE BY STUDENTS IN 2003: STATISTICAL ANALYSIS REPORT, Table 5 (Sept. 2006), available at http://nces.ed.gov/pubs2006/2006065.pdf. When comparing the U.S. Department of Education's report of child Internet use in 2003 with the U.S. Census Bureau's report for 2007, the percentage of children accessing the Internet in 2003 was approximately 3% higher than in 2007. Compare DeBell & Chapman (Sept. 2006), with U.S. Census Bureau (June 2009).
While this use may have decreased during this time period, the most likely explanation is that the Census Bureau included 3- and 4-year-olds in its 2007 data, while the Department of Education did not. Children ages 3 and 4 are less likely to use the Internet than children ages 5 to 17. The author of this note estimates (erring conservatively on the side of the Department of Education) that the percentage of 5- to 17-year-olds that used the Internet in 2007, according to the Census data, is 58.6%. This number was calculated as follows: there were 3,691,000 more children counted by the Census in 2007 than by the Department of Education in 2003; it was assumed that this difference represented the 3- and 4-year-old age group, since that was the difference in sample; according to the Department of Education, the percentage of Internet use in the age group closest to 3- and 4-year-olds (nursery school) was 23% in 2003; using that closest known percentage, the number of children ages 3 and 4 that used the Internet in 2007 was 848,930. Subtracting the 3,691,000 children aged 3 and 4 from the 2007 census figure, as well as the 848,930 that would have used the Internet, the author is left with an estimate of 58.6%.
(167.) See DeBell & Chapman, supra note 166, at Table 6. According to 2003 statistics, 35% of students from first to fifth grade used computers to complete school assignments in the home, and 34% to connect to the Internet there. Id. These numbers rose for students in the sixth to eighth grade, where 62% used computers to complete school assignments, and 54% to connect to the Internet. Id. Even higher was the rate of use for students in the ninth to twelfth grade, where 69% used computers to complete school assignments, and 64% to connect to the Internet. Id.
(168.) Eric C. Newburger, Home Computers and Internet Use in the United States: August 2000, SPECIAL STUDIES (U.S. Census Bureau), 4 (Sept. 2001), http://www.census.gov/prod/2001pubs/p23-207.pdf.
(170.) Cho & Cheon, supra note 161, at 490 (citing ELLEN A. WARTELLA ET AL., CHILDREN AND INTERACTIVE MEDIA: RESEARCH COMPENDIUM UPDATE 5 (Nov. 2002), http://www.policyarchive.org/handle/10207/bitstreams/16017.pdf).
(171.) Anne Marie Kelly, Five Things You Should Know About Kids and the Internet, MEDIAPOST COMM. (Feb. 10, 2009, 1:01 PM), http://www.mediapost.com/publications/index.cfm?fa=Articles.showArticle&art_aid=100062 (citing GFK MEDIAMARK RESEARCH & INTELLIGENCE, 2008 AMERICAN KIDS STUDY).
(172.) Public Schools and Instructional Rooms with Internet Access, by Selected School Characteristics: Selected Years, 1994 through 2005, U.S. DEP'T EDUC., NAT'L CTR. FOR EDUC. STATISTICS (July 2007), http://nces.ed.gov/programs/digest/d07/tables/dt07_413.asp.
(174.) Id. This specific increase was due, in part, to an enormous federal government spending increase to wire schools and libraries; in 1998, the Clinton administration committed a $1.9 billion subsidy to install Internet access in schools. Children Online (1998)--Statistics, MEDIA AWARENESS NETWORK, http://www.media-awareness.ca/english/ resources/research_documents/statistics/Internet/children_on line.cfm (last visited Apr. 8, 2011) (citing Jupiter Communications, THE INDUSTRY STANDARD (December 4, 1998)).
(175.) U.S. Department of Education, supra note 166.
(176.) DeBell & Chapman, supra note 166, at Table 8A.
(178.) Id. at Table 6.
(179.) FCC v. Pacifica, 438 U.S. 726, 773 (1978).
(180.) Pacifica, 438 U.S. at 748-49 (citing Rowan v. U.S. Post Office Dep't, 397 U.S. 728 (1970)).
(181.) See Reno II, 521 U.S. at 869.
(182.) ACLU v. Reno (Reno I), 929 F. Supp. 824, 830-849 (E.D. Pa. 1996), aff'd, 521 U.S. 844 (1997).
(183.) Reno II, 521 U.S. at 863 (quoting Reno I, 929 F. Supp. at 824).
(184.) Reno II, 521 U.S. at 869 (emphasis added).
(185.) Reno I, 929 F. Supp. at 876 n.19.
(186.) Reno II, 521 U.S. at 854 (emphasis added) (quoting Reno I, 929 F. Supp. at 844-45).
(187.) Spamdexing "is the deliberate manipulation of search engine indexes." Spamdexing, WIKIPEDIA, http://en.wikipedia.org/wiki/Spamdexing (last visited Apr. 17, 2011).
In evaluating textual relevance, search engines consider where on a web page query terms occur. Each type of location is called a field. The common text fields for a page ... are the document body, the title, the meta tags in the HTML header, and [the] page ... URL. In addition, the anchor texts associated with URLs that point to [the page] are also considered belonging to [it] (anchor text field), since they often describe very well [its] content.... The terms in [the page's] text fields are used to determine [its] relevance ... with respect to a specific query (a group of query terms), often with different weights given to different fields. Term spamming refers to techniques that tailor the contents of these text fields in order to make spam pages relevant for some queries.
Zoltan Gyongyi & Hector Garcia-Molina, Web Spam Taxonomy, AIRWEB FIRST INTERNATIONAL WORKSHOP ON ADVERSARIAL INFORMATION RETRIEVAL ON THE WEB, 2 (May 10-14, 2005), http://airweb.cse.lehigh.edu/2005/gyongyi.pdf.
(188.) "Google bombing," on the other hand, is a form of search engine result manipulation achieved by creating and placing hyperlinks on pages that affect other websites' rankings in search engine results. See Tom Zeller, Jr., A New Campaign Tactic: Manipulating Google Data, N.Y. TIMES, Oct. 26, 2006, at A20, available at http://www.nytimes.com/2006/10/26/us/politics/26googlebomb.html; Clifford Tatum, Deconstructing Google Bombs: A Breach of Symbolic Power or Just a Goofy Prank?, FIRST MONDAY (Oct. 3, 2005), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1287/1207 (analyzing the tactic as a social phenomenon motivated by political affiliation and goals). "Googlewashing" is a tactic of manipulating media to affect a term's perception, or to eliminate competition from search engine results pages. Andrew Orlowski, Anti-war slogan coined, repurposed and Googlewashed ... in 42 days, THE REGISTER, Apr. 3, 2003, http://www.theregister.co.uk/2003/04/03/antiwar_slogan_coined_repurposed/.
(189.) Cf. Filtering Ineffective on Web 2.0 and Mobile Content, THE EU IN MALTA (Jan. 21, 2011, 3:43 PM), http://ec.europa.eu/malta/news/safe_internet_mt.htm.
(190.) Reno II, 521 U.S. at 854 (quoting Reno I, 929 F. Supp. at 844-45).
(191.) At the District Court level, the following factual finding was made:
America Online (AOL), Microsoft Network, and Prodigy all offer parental control options free of charge to their members. AOL has established an online area designed specifically for children. The "Kids Only" parental control feature allows parents to establish an AOL account for their children that accesses only the Kids Only channel on America Online.
Reno I, 929 F. Supp. at 842.
(192.) FCC v. Pacifica, 438 U.S. 726, 748 (1978).
(193.) See, e.g., Daniel Zeng & Huiqian Li, How Useful Are Tags?--An Empirical Analysis of Collaborative Tagging for Web Page Recommendation, in Lecture Notes in Computer Science 5075 (David Hutchison et al. eds., 2008).
(194.) Reno II, 521 U.S. at 881. The District Court also emphasized that a "tagging" plan carried an unfair and impossible burden for web providers: "providers must review all of their material currently published online, as well as all new material they post in the future, to determine if it could be considered 'patently offensive' in any community nationwide." Reno I, 929 F. Supp. at 856.
(195.) Before recent developments, Generic Top Level Domains (gTLDs) consisted of the core group of domains--for example, .com, .info, .net, and .org--as well as infrastructure and country-code domains. See Generic Top-level Domain, WIKIPEDIA, http://en.wikipedia.org/wiki/Generic_top-level_domain (last visited Apr. 17, 2011). See generally New gTLD Draft Applicant Guidebook Version 3: Public Comments Summary and Analysis, ICANN (Feb. 15, 2010), http://www.icann.org/en/topics/new-gtlds/summary-analysis-agv3-15feb10-en.pdf. According to recent endeavors of the Internet Corporation for Assigned Names and Numbers (ICANN), corporations may apply for and bid on new gTLDs to replace their current domain with any combination of 64 characters. Matthew Humphries, Generic Top-Level Domain Roll-Out Delayed by ICANN, GEEK.COM (Feb. 19, 2009), http://www.geek.com/articles/news/generic-top-level-domain-roll-out-delayed-by-icann-20090219/.
(196.) See .XXX Adult Entertainment Domain Name Gets the Go Ahead, ICM REGISTRY, http://www.icmregistry.com/news/welcomeapproval.php (last visited Apr. 12, 2011); 18 March 2011 Draft Rationale for Approving Registry Agreement With ICM for .XXX sTLD, ICANN (Mar. 18, 2011), http://www.icann.org/en/minutes/draft-icm-rationale-18mar11-en.pdf.
(197.) Content-control software can either be purchased and used by a user on his or her private computer, or implemented by an ISP. Many companies sell content-control software: for example, "Net Nanny" operates to block certain websites or terms from a child's Internet activity. Net Nanny[TM] 6.5 New Features, NET NANNY, http://www.netnanny.com/products/netnanny (last visited Apr. 10, 2011). In 2006, "more than half of US families" with children who access the Internet used content-control software, and there were more than 12 million copies of this software in use. Joris Evers, Windows Live to Get Censorware, ZDNET (Mar. 14, 2006), http://www.zdnet.co.uk/news/securitymanagement/2006/03/14/windows-live-to-get-censorware-39257292. An example of objectionable material such filters can catch involves the newly authorized generic top-level domain, ".xxx," which labels pornography websites in such a way that private users can filter them out. .XXX Adult Entertainment Domain Name Gets the Go Ahead, supra note 196; .XXX FAQ, ICM REGISTRY, http://www.icmregistry.com/about/faq.php (last visited Apr. 12, 2011).
Another option is to block access to Internet areas that have already been identified as hate sites. The Anti-Defamation League, a U.S. non-profit human-rights group, has developed a product called HateFilter that specifically targets several hundred such Web sites. Protecting Children and Teens from Online Hate, MEDIA AWARENESS NETWORK, http://www.media-awareness.ca/english/issues/online_hate/protect_child_hate.cfm (last visited Mar. 30, 2011).
(198.) Indranath Gupta, '.XXX' Sponsored Top-Level Domain--Is It a Solution to Curb Child Abuse Due to Internet Pornography?, SCRIPTED (2005), http://www.law.ed.ac.uk/ahrc/SCRIPT-ed/vol2-3/gupta.asp.
(199.) The business aspects of gTLDs were a primary topic of conversation at the 32nd International ICANN Meeting. See Briefing Note, Overall Summary of the Paris Meeting, ICANN PARTICIPATION, http://par.icann.org/briefing-note (last visited Apr. 8, 2011).
(200.) See supra notes 112-17 and accompanying text.
(201.) FCC v. Pacifica, 438 U.S. 726, 748 (1978).
(202.) See Marsh, supra note 5, at 386-90.
(203.) See discussion supra Part II.A.3.
(204.) See discussion supra Part II.C.2.
(205.) First, the Internet has a vast scope, such that those with the most resources do not necessarily have the biggest audience because one person can send a single communication to millions with extreme ease. Second, there is no scarcity of resources on the Internet. Thus, while other forms of communication can oftentimes be cost-prohibitive, the expense of communication through the Internet is essentially zero. Marsh, supra note 5, at 387.
Second, unlike other debate forums--such as classrooms or other public atmospheres--the Internet begets discussions between people who agree with one another. Thus, there is "no real exchange of ideas on www.whitepower.com." S. POVERTY L. CTR., Internet Hate and the Law, 97 INTELLIGENCE REP. (Winter 2000), http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2000/winter/Internet-hate-and-the-law (quoting excerpts from a presentation paper given by Mark Potok, Intelligence Report Editor, to the U.N. High Commission on Human Rights in Geneva, Switzerland, Feb. 2000).
(206.) This Note was motivated in part by the United Nations' International Telecommunication Union's decision, in September 2008, to follow the Chinese government's proposal to conduct an ongoing construction of technical "IP Traceback" standards that could be used to trace the original source of communications over the Internet. As many civil rights advocates explain, this attempt has the potential to erode international Internet users' ability to engage in the marketplace of ideas while remaining anonymous. It is not only troubling that the Chinese government seeks this mainly in order to stifle "negative articles" posted by the government's political adversaries, but also intriguing that the U.S. National Security Agency is participating in the research process. The United States' involvement in developing more sophisticated technology to identify anonymous Internet posters carries with it the potential for the NSA to share these identities with law enforcement. Thus, it behooves the legal system to develop a plan for how law enforcement will be able to use this power. Although the ITU originally estimated that its proposal would be complete in 2009, it has not released a statement confirming that the proposal has been completed. See Declan McCullagh, U.N. Agency Eyes Curbs on Internet Anonymity, CNET, Sept. 12, 2008, http://news.cnet.com/8301-13578_3-10040152-38.html.
(207.) See Michael Froomkin, Flood Control on the Information Ocean: Living with Anonymity, Digital Cash, and Distributed Databases, 15 U. PITT. J. L. & COMMERCE 395, 402 (1996).
(208.) See Communications Decency Act of 1996, 47 U.S.C. [section] 230(c)(2) (2011).
(209.) See generally J.R. Suler & W. Phillips, The Bad Boys of Cyberspace: Deviant Behavior in Multimedia Chat Communities, CYBERPSYCHOLOGY & BEHAV. 1 (Sept. 1997), http://users.rider.edu/~suler/psycyber/badboys.html (describing the rise of indecent Internet speech in anonymous online chat forums).
(211.) Id. at 3.
(213.) See generally Suler, supra note 209.
(214.) See supra text accompanying note 72; see also DEP'T OF HOMELAND SEC., supra note 10, at 2.
(215.) See generally Wolf, supra note 12.
(216.) 475 U.S. 41, 47 (1986).
(217.) 505 U.S. 377, 389 (1992).
(218.) The proscribed movie theaters were defined as follows: "[a]n enclosed building used for presenting motion picture films, video cassettes, cable television, or any other such visual media, distinguished or characteri[zed] by an emphasis on matter depicting, describing or relating to 'specified sexual activities' or 'specified anatomical areas' ... for observation by patrons therein." 475 U.S. at 41, 44.
(220.) Id. at 47-48.
(221.) Id. The Supreme Court has rejected the application of a secondary effects argument on several occasions. See Boos v. Barry, 485 U.S. 312, 322 (1988) (holding that a restriction of speech critical of foreign governments near their embassies was not aimed at the secondary effects of congestion, visual clutter, or protecting security; instead, the restriction was aimed entirely at the content of the speech, namely the messages critical of foreign diplomats); Reno v. ACLU (Reno II), 521 U.S. 844, 867-68 (1997) (finding that the Communications Decency Act (CDA), prohibiting the transmission of obscene or indecent communications by means of a telecommunications device to persons under the age of 18, was enacted "to protect children from the primary effects of 'indecent' and 'patently offensive' speech, rather than any 'secondary' effect of such speech.").
(222.) 529 U.S. 277, 283-86 (2000).
(223.) Id. at 296.
(224.) Id. at 298.
(225.) The Court in R.A.V. also provided guidance for applying a "secondary effects" argument, expanding on Renton: "Listeners' reactions to speech are not the type of 'secondary effects' we referred to in Renton." R.A.V. v. City of St. Paul, 505 U.S. 377, 394-95 (1992) (quoting Boos, 485 U.S. at 321). Specifically, "[t]he emotive impact of speech on its audience is not a 'secondary effect.'" Id.
(227.) Reno v. ACLU (Reno II), 521 U.S. 844, 867-68 (1997).
(229.) On an international scale, "hacktivism" is still such a major threat that the United States National Security Agency spends money to protect citizens against it. See supra text accompanying note 72.
(230.) See Talley v. California, 362 U.S. 60 (1960). In Talley, the Supreme Court declared unconstitutional a Los Angeles City ordinance restricting the distribution of handbills that did not display on the cover the name of the person who wrote the handbill or caused it to be written (unless the address of a person writing under a pseudonym is provided). Id. at 60-61. In addition, the Supreme Court, in Buckley v. American Constitutional Law Foundation, 525 U.S. 182, 193-95 (1999), declared unconstitutional a Colorado law that required people who were gathering signatures for a petition to be registered voters and to wear badges identifying the name of the petition-circulator.
(231.) See Dendrite Int'l, Inc. v. Doe No. 3, 775 A.2d 756 (N.J. Super. Ct. App. Div. 2001).
(232.) See Talley, 362 U.S. at 60-61.
(233.) McIntyre v. Ohio Elections Comm'n, 514 U.S. 334, 357 (1995).
(234.) See generally Jonathan D. Wallace, Nameless in Cyberspace: Anonymity on the Internet, CATO INSTITUTE BRIEFING PAPER NO. 54, Dec. 8, 1999, http://www.cato.org/pubs/briefs/bp54.pdf. "Cato's Letters" were an enormously influential and popular set of freedom of speech essays written by two Englishmen--John Trenchard and Thomas Gordon--under the pseudonym "Cato"; they were later published by Benjamin Franklin and quoted by John Adams and Thomas Jefferson. Id. at 2. In addition, The Federalist Papers were published by Alexander Hamilton, John Jay, and James Madison under the pseudonym "Publius," and Thomas Paine's Common Sense was merely signed "An Englishman." Id.
(235.) See supra text accompanying notes 23-28.
(236.) See Dendrite, 775 A.2d at 756.
(237.) See Columbia Ins. Co. v. Seescandy.com, 185 F.R.D. 573, 578 (N.D. Cal. 1999).
(238.) Dendrite, 775 A.2d at 771.
(239.) Id. at 759-62.
(240.) Evan Kubota, Maryland's Highest Court Adopts Dendrite Standard for Unmasking Anonymous Forum Posters in Defamation Actions, HARV. J. L. & TECH. DIGEST (May 6, 2009), http://jolt.law.harvard.edu/digest/Internet/independent-newspapers-v-brodie.
(242.) See supra Part II.C.1.
Julian Baumrin, Managing Editor of Rutgers Computer and Technology Law Journal, 2010-2011.