Everybody and nobody: Visions of individualism and collectivity in the age of AI.
In one of the earliest episodes of Homer's Odyssey, Odysseus and his men find themselves trapped in the cave of the Cyclops Polyphemus. The wily Greek lulls the monster to sleep with divinely powerful wine, then pokes out his only eye with a flaming wooden stake.
Right before Polyphemus passes out, Odysseus prepares an insult to go along with the injury: when the Cyclops asks his name, he responds by claiming it's "Nobody." Thus, when the blinded Polyphemus seeks help from his fellow monsters and retribution from his divine father, the sea god Poseidon, all the poor creature can tell them is that "Nobody's killing me," and "Nobody made me suffer" (Finley, 1978).
The plan works, and Odysseus escapes with his remaining men to their ship, as the blind Cyclops, relying on his ears instead of his eyes, heaves boulders in their direction. This is when the hero commits his greatest error: despite his men's entreaties, he begins to taunt the Cyclops, reveling in his victory by revealing his true identity:
Cyclops-- if any man on the face of the earth should ask you who blinded you, shamed you so--say Odysseus, raider of cities, he gouged out your eye, Laertes' son who makes his home in Ithaca! (Finley, 1978)
This burst of pride proves the hero's undoing. The god Poseidon, vengeful for his son, becalms Odysseus' ship, setting off a chain of events that will delay his return home to his wife and son in Ithaca by a decade.
The Odyssey has survived the millennia in part because it's a gripping yarn poetically spun, but also because the story of Odysseus serves as a cautionary tale about pride: Place your individual needs, aspirations, and identity above those of your compatriots, and you will make the gods very, very angry. As scholars ranging from Adorno and Horkheimer (1992) to Fuqua (1991) to Weiner (2014) have argued from various perspectives, this cautionary tale and others like it have functioned historically as a way for societies to navigate the tensions between the needs and identities of the individual and the collective, and, at times, to reinforce the primacy of the latter over the former.
Social historians and philosophers have long debated the degree to which most people living in the Western world in the two millennia following the publication of the Odyssey understood themselves to be subjects, distinct from the communal identities they inhabited. Yet the vast majority of scholarship tends to agree that while subjectivity may have existed in some form, the individual always remained necessarily subordinate to larger units of social measurement. Pipes (1990, p. 95), for example, writes that Tsarist Russian peasants "had no opportunity to acquire a sense of individual identity," and thus "submerged the individual in the group," and Parekh (1997, p. 524) argues that "in almost all premodern societies, the individual's culture was deemed to be an integral part of his identity... the cultural communities were therefore widely regarded as the bearers of rights."
This dynamic was profoundly altered as the "modern individual" was birthed during the Renaissance, accelerated through the age of industrialization and colonial expansion, reached its apogee in Enlightenment-era liberalism, and was reified in the mythos of Romanticism. And as society reorganized itself around this newer, smaller, more subjective kernel of rights and responsibilities, our stories changed, as well.
Modern narratives, especially works of speculative fiction, tend to begin with the individual as an axiomatic entity, and then to weave both cautionary and celebratory tales around his feats, foibles, and failures. Jules Verne's Around the World in 80 Days, for instance, is almost an anti-Odyssey, the hero's timely return a testament to his self-determination and moral rectitude. By contrast, Mary Shelley's Frankenstein (subtitled The Modern Prometheus) tells the tale of enlightenment run amok, of a scientist's creation getting the better of him and taking the lives of those he loves.
Victor Frankenstein and Odysseus share the tragic flaw of hubris, each pridefully provoking the elemental forces of the natural world into angry rebellion, leaving a monster in his careless wake. Yet if Odysseus's sin resides in claiming individual victory over the Cyclops, risking kith, kin, and kingdom in the giddy celebration of his own name and accomplishments, Frankenstein is guilty of the opposite. Immediately upon seeing his own creation brought to life, he flees, full of disgust and renouncing any responsibility for his work. Ironically, it is not Frankenstein's scientific achievement but rather his dissociation from it that sets the wheels of fate in motion. When the monster finally confronts his maker, he explicitly blames his murderous rage on this rejection:
You, my creator, detest and spurn me, thy creature, to whom thou art bound by ties only dissoluble by the annihilation of one of us.... Do your duty towards me, and I will do mine towards you and the rest of mankind. If you will comply with my conditions, I will leave them and you at peace; but if you refuse, I will glut the maw of death, until it be satiated with the blood of your remaining friends. (Shelley & Butler, 1994)
As modernity ripened and mellowed in the twentieth century, and Western liberalism faced challenges from newer collectivist ideologies, the narratives changed again. Cold War-era speculative fiction in America and Western Europe--often seen as the "golden years of science fiction" (Asimov & Greenberg, 1988)--frequently relied on the specter of collectivism and its implied or explicit threats to individualism as a foil for the story's heroic or antiheroic protagonist. Earlier narratives of this era manifested this threat in the form of dystopian political states ruled with mechanical precision by faceless bureaucrats. Orwell's (1949) Ingsoc and Rand's (1946) Council are two notable examples, though this trope has become a staple of contemporary young-adult fiction, epitomized in Collins' (2008) The Hunger Games trilogy and Roth's (2011) Divergent trilogy.
As the Cold War birthed the "space race," speculative fiction's collectivist boogeymen morphed from earthbound bureaucracies to alien races and mechanical amalgams. Perhaps the most widely known exemplar is The Borg, a ruthless, hive-minded, expansionist galactic empire first introduced on television in Star Trek: The Next Generation and, for the past quarter century, a staple of the Star Trek franchise universe. The Borg leverages the technological expertise of its conquered species to overcome and "assimilate" new species into its "collective." Once assimilated, individuals are incorporated into a univocal, shared consciousness in which any trace of personal or cultural idiosyncrasy is expunged and all decisions are made for the benefit of the group.
When The Borg was first introduced to the Star Trek universe in 1989, months before the fall of the Berlin Wall, the species was presented as unequivocally evil, an avatar of doom speeding its way across the galaxy to impose its collectivist will on an unprepared and outgunned pluralistic Earth. As the '90s progressed, and the political shadow of institutional Communism receded, The Borg took on more nuanced hues. Seven of Nine, a human who was liberated after a lifetime as a Borg drone in the spinoff series Star Trek: Voyager, feels genuinely conflicted about leaving the hive mind and at times expresses a desire to rejoin the collective (spoiler alert: individualism wins out).
In the early twenty-first century, an era distinguished in part by global communications networks, ubiquitous computing, and widespread political upheavals over neoliberal economic policies and their environmental consequences, the narrative has shifted again. Today, the avatar of collectivism is the Singularity, a cataclysmic fusion of human and artificial intelligence (AI) projected to occur around the year 2040 by futurist and inventor Ray Kurzweil (2005).
Though Kurzweil is a widely respected thinker, his claims regarding the Singularity have been polarizing in both technical and cultural circles. Some scientists, such as Rodney Brooks (2015), actively doubt whether machine sentience is even possible, while others, such as Stephen Hawking, have sounded the alarm that AI "could spell the end of the human race" in the not-so-distant future (Cellan-Jones, 2014). Culturally, some have flocked to Kurzweil's side, dissecting his books, screening his films, and even enrolling in Singularity University to learn more about his ideas, whereas others have dismissed him and his followers as a millennialist cult, depicted Kurzweilesque scientists as the villains in high-concept films like 2014's "Transcendence," or lampooned the Singularity movement as the "Rapture of the Nerds" (Doctorow & Stross, 2012).
Of course, none of us can say for certain whether AI is likely to lead to a Singularity, either in 2040 or at any point in the future. Furthermore, it's an open question what the implications of such a cataclysm might be for human consciousness and culture. But the way we discuss the prospect tells us a lot about the relative weights we accord to the individual and the collective in contemporary society. In Kurzweil's The Singularity Is Near (2005), a hypothetical character from the year 2104 named Molly explains that she has transcended the bonds of human identity:
Being nonbiological, I'm able to change who I am quite readily.... If I'm in the mood, I can combine my thought patterns with someone else's and create a merged identity. It's a profound experience. (p. 382)
Leaving aside the psychoanalytical implications of this fantasy, we can understand it as the expression of a new narrative and social relationship to collectivity in the West. Socialism and fascism are still potent political phantoms (the enduring popularity of Fox News is testament to this), but the divide-and-conquer politics of global neoliberalism tend to be felt much more acutely by many Americans and Europeans in the context of austerity economics, eroding social infrastructures, and mounting environmental crises. At the same time, the rise of social media and other forms of participatory culture has helped to tip the scales toward a new recognition that collectives can be just as effective and meaningful units of social organization as individuals.
Though the long pendulum of social history appears to be swinging back from modernity's individualistic excesses and extremes, it's worth considering what the opposite extreme might hold in store for us. Now that the Singularity has been named, might it not become a self-fulfilling prophecy, a goal toward which we orient ourselves in the ever-accelerating coevolution of technology and society? And if we, like Molly, gain the power to become everybody, don't we really become Nobody?
Response from Jessa Lingel
To engage questions of individuality versus collectivism is at once a philosophical exercise and a matter of political necessity. Hubris is not only a matter of individual pride; it's also a phenomenon that applies to groups (think sports franchises or, for that matter, universities) as much as nations. Against (but also within) hubris, I would posit hybridity as a subject position of converging differences rather than indulging singularity.
As a genre, science fiction tends to encourage considerations of hybridity, which is sometimes about what constitutes the human and what constitutes machines, and other times about unlikely alliances or radical partnerships. Of course, hybridity itself can be a source of hubris in the sense of producing singularity or freakishness--I'm thinking of Arturo in Geek Love with his disdain for bodies untouched by stigma and otherness. Or, to follow Aram's references to Greek mythology, think of the hubristic hybridity of half human, half god heroes from Achilles to Hercules. More often, however, hybridity is a moment of subjectivity shattering--the Terminator (of Terminator 2) realizing he's not human, Star Trek: The Next Generation's Worf occupying a perennial outsider (and ambassador) status between civilizations still uneasy from war. It is by drawing on this liminal experience of belonging and not belonging, margin and center, individual and collective that I wish to consider science fiction and future types for a radical activist agenda.
Aram mentions anticommunism as a driver for delegitimizing the collective, but of course there are other overlapping and not mutually exclusive provocations promoting a twentieth century turn toward individualism. In addition to political explanations are psychoanalytical ones, Freudian norms of introspection, personal analysis, and hyperindividualism. In fact, the liminality between self and other anchors Freud's concept of the uncanny. Moments of uncanniness stem from encounters with hybridity--a diseased body, a dismembered limb--in which otherness is both familiar and uneasy. The power of these moments comes from confronting the otherness within oneself, and the selfness within others. In this way, uncanniness is vital to many of the most profound moments of science fiction, the reverberations between individual and other.
I am arguing here that hybridity (especially in contrast with hubris) can provide an entry point for radical alliances and coalitions that hinge on a simultaneous accounting of individual and collective, a both/and rather than a versus. It is not enough, indeed, simply to advocate or agitate for collective politics. Feminist politics has struggled with precisely this question of how to draw together while retaining selfhood, of building solidarity without losing individuality, from lesbian separatism to intersectionality, and these ideological and political discourses take shape in the struggle between self and collective, personal and polis.
This is perhaps why queer and feminist theorists (such as Lisa Henderson and Jennifer Nash) have begun to converge on the idea of love as critical for ethical engagement and social justice. The trick, of course, is that hubris is also love: self-love. But as Foucault argues in the third volume of The History of Sexuality, there are some forms of love that move between self and other, individual and community. These forms of love are a source of pleasure, instruction, and community. In fact, perhaps hubris is love without reference to the other, Odysseus having temporarily forgotten not only his companions but Penelope. Perhaps more than anything else, it is hubristic forgetting that is most instructive here: forgetfulness of others and of oneself, of the consequences of one's actions, leads to the most egregious punishments and forecloses the most necessary alliances.
Response by Gideon Lichfield
It's provoking to consider the Singularity an expression "of a new narrative and social relationship to collectivity in the West," as Aram does, because another way to look at it is as a rejection of that collectivity. The question of whether the Singularity can ever come about is of course ultimately a technical one. But the divide around that question, as Aram observes, is also cultural. And those who are most enthusiastic about the Singularity today tend to be not collectivists, but extreme individualists. They are the libertarian entrepreneurs who are also at the vanguard of the "quantified self" movement, collecting volumes of ever-finer-grained data on their daily behaviors in the attempt to optimize their lives for maximum productivity, efficiency, and happiness. For these would-be superbeings, uploading to or merging with machine intelligence would be simply the ultimate expression of this self-realization and self-improvement--an escape from the mundane and bothersome nature of membership (even their own privileged membership) in a flesh-and-blood society that is held back from advancement by its tiresome need to support--economically and socially--large numbers of less fortunate, intelligent, and motivated people. In short, I would argue that the Singularitarians are not collectivists by nature.
At root, this is due to a disconnect between the science-fictional ideal of the Singularity and the grubby reality of technological progress. Certainly, there are science-fiction scenarios in which humanity collectively ascends to a Singularity-like Valhalla. In Iain M. Banks' Culture novels, for instance, civilizations that feel they have reached the end of their natural evolution can choose to disappear into a mysterious, heavenly state called the Sublime. In James Hider's Cronix, to take another example, the invention of mind-uploading prompts almost the entire population of Earth to commit suicide so that they can live out an infinity of subsequent lives in a fantastic variety of virtual worlds.
But here on Earth, the relationship between technology, money, and privilege being what it is, the first people to take advantage of the Singularity when it came would likely be those interested less in brotherly unity with the masses and more in exploring its potential to elevate themselves into demigods. (As an aside, one might also call the Singularity an expression of individualist philosophy in another sense: To the extent that the Singularity implies the rise of true machine intelligence, it represents the moment when machines cease to be lifeless hunks and become beings in their own right. Before we even start to talk about what it means to live in a collective mental embrace with them, we will have to work out what rights they have as individuals.)
With all that said, even if the Singularity never comes, we are already building a kind of lower-tech proxy for it that, by contrast, is highly collectivist. As Aram observes, the global economic, environmental, and political storms of the past decade and a half have indeed weakened the West's idealistic notion that an individual with sufficient smarts and gumption can become whoever she wants; and at the same time, technology has made new forms of more collective social organization possible.
This trend, moreover, is only going to broaden: If the last few years (Occupy, the Arab Spring and their echoes) have seen the rise of networked political movements, the next few are going to see the spread of networked economic movements, as ever-greater numbers of people start to get their income through the "gig economy" (the online economic platforms such as Uber, Taskrabbit, Airbnb, and their hundreds of offshoots for various industries). These will have far-reaching and as-yet poorly understood effects on the structure of the economy, but one self-evident outcome is that distributed decision-making--the result of many decisions by small-scale economic actors--will become more important, while individual decisions by large economic actors (corporations and governments) will become less so.
Aram asks of our putative future membership in the Singularity: "If we... gain the power to become everybody, don't we really become Nobody?" I'd respond that in these increasingly distributed structures, both economic and political, it's the other way around: Nobody becomes Everybody. A single tweet is meaningless, but collectively they form part of a movement that can bring down governments. At the same time, any Nobody can also become Somebody--such as Mohamed Bouazizi, the Tunisian street vendor who immolated himself and unwittingly kicked off the Arab Spring.
Perhaps it's no accident that the technological basis of these networks of collective action is built by the same people who dream of the Singularity. But these social networks that allow Nobody to become Everybody may realize the collectivist potential of the Singularity far better than the Kurzweilian fantasy that makes Everybody into Nobody.
Response by Adam Richard Rottinghaus
Aram presents a compelling set of provocations that bring the central problem of liberal political theory--the tension between the "I" and the "We"--to bear on AI. He positions The Odyssey as one of the earliest pieces of speculative fiction in order to describe how Odysseus elevates his own ego above his crew in a brazen act of hubris that brings tragedy upon his men. I'd like to begin by extracting a different narrative from The Odyssey in order to respond to the intersections between AI, human consciousness, and the I/We political tension.
In The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976), Julian Jaynes theorizes that The Odyssey marks a turning point in the development of human consciousness. He argues that Western literature prior to The Odyssey offers few hints of reflexive cognition in its characters, and instead deploys a narrative mechanism in which gods, or voices, instruct character action. In Jaynes' theory, the "god-voice" is evidence of a neurological condition of bicameralism, in which the two hemispheres of the human brain performed different linguistic, analytical, and conscious functions: the right hemisphere anonymously delivered instructions, which the left hemisphere carried out. Bicameral humans erroneously attributed the source of these voices to gods, muses, or deities. ("Noah, build an ark!") Humans with bicameral minds would not know why they were doing things. They would simply carry out instructions without the reflexive cognition that the instructions were, in fact, their own thoughts.
In Jaynes' theory, as the scale of human society grew, new social relationships and forms of knowledge emerged that disrupted the bicameral form of an instruct-and-obey consciousness. These new relationships and ideas forced humans to reflect on the meaning of their actions in ways the previous, simpler social structures did not necessitate. Odysseus's introspection--a narrative technique not found in Western literature prior to The Odyssey--is a key piece of evidence Jaynes uses in concert with neurological studies on patients who had undergone split-brain surgeries that severed the two hemispheres from each other.
However, decades of research on patients with severed corpus callosums, and astounding advances in brain imaging research, have all but debunked Jaynes' theory. Nonetheless, bicameralism is a powerful metaphor for thinking through the technical and political implications of I/We, human consciousness, and AI, especially if one considers the theoretical parallels to the dawn of the liberal political recognition that people--not divine rights--constitute political power.
Imagine that a computer programmer (or user) and a computer represent a bicameral structure of human-machine consciousness. The programmer delivers the instruction to the computer via an interface, and the computer unreflexively carries out the instructions. The computer program does not ask why it must carry out the instructions, nor does it meditate on their nature. It simply carries out the instructions, as a bicameral human would have done. To give substance to the metaphor, let me offer an example by way of inversion. The real trick performed by Watson--IBM's Jeopardy champion computer--was to reconstruct the instructions by coming up with the query to the pregiven answer. It was a monumental programming achievement to create flexible semantic-processing algorithms capable of abstracting and isolating a correct question based on polysemic answer prompts. Faux-conscious computers aside, the themes of bicameral consciousness and AI are central to the resolution of Star Trek: The Motion Picture.
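Jaynes' instruct-and-obey structure maps neatly onto ordinary imperative computing, and the metaphor can be made concrete in a few lines of code. The sketch below is purely illustrative (the class and method names are invented, not drawn from any real system): the "god-voice" issues instructions from outside, and the machine executes them without any representation of why.

```python
# A toy illustration of the bicameral human-machine metaphor: instructions
# originate outside the machine, and the machine carries them out verbatim,
# with no reflection on their purpose. Names here are hypothetical.

class BicameralMachine:
    """Executes whatever it is told; keeps no model of 'why'."""

    def __init__(self):
        self.log = []  # a record of actions, but not of reasons

    def obey(self, instruction, *args):
        # No introspection: the instruction is simply carried out.
        result = instruction(*args)
        self.log.append((instruction.__name__, args, result))
        return result

# The "god-voice": commands arrive from outside the machine.
machine = BicameralMachine()
total = machine.obey(sum, [1, 2, 3])
print(total)              # 6
print(machine.log[0][0])  # 'sum'
```

The point of the sketch is the asymmetry: the `obey` method can report *what* it did (the log) but can never report *why*, which is precisely the gap Jaynes claims introspective consciousness eventually filled.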
In Star Trek: TMP, Earth is on the brink of destruction as an unknown powerful being that refers to itself as V'ger threatens to destroy all life on Earth unless it transmits data to the "Creator." It eventually comes to light that V'ger is actually one of the Voyager space probes that was launched some 300 years earlier. Voyager had encountered a machine race and with their help had traveled across the galaxy carrying out its basic programming instructions "to learn all that is learnable and transmit the information to the creator." Despite near infinite knowledge, V'ger had the cold logical consciousness of a machine carrying out instructions.
While Kirk asserts that V'ger's near-infinite knowledge spawned consciousness, V'ger bears far more resemblance to the bicameral mind than to the introspection found in the Cartesian theatre. V'ger's bicameral consciousness is split between the god-voice of its creator and its awareness of the instructions it must carry out. To achieve a new consciousness, V'ger must break down the bicameral structure by merging with the Creator. Captain Decker volunteers to stand in for humanity/Creator in part because of his own curiosity, and in part because V'ger has taken the form of his one-time love interest. Audiences are left to wonder whether it is his curiosity or love that compels him to volunteer.
This brings us to the I/We problem of human-machine consciousness at the core of the film's resolution. When V'ger merges with "the Decker unit," an entirely new form of consciousness is born from the breakdown of the bicameral structure of the artificially intelligent V'ger (action) and Decker's human consciousness (instruction). Together they transform into an entirely new life form that transcends the boundaries between individual/collective and machine/human. As Decker and V'ger merge into an ethereal consciousness without bodily form, Kirk, Spock, and Dr. McCoy muse about just having witnessed the birth of a new life form. But they also point out that what V'ger needs is not the specific Decker Unit, but the human qualities--such as curiosity, creativity, and empathy--that Decker possessed. I have been using bicameralism as a metaphor for thinking through the issue of human and AI consciousness merging while hinting at the politics.
Star Trek: TMP operates as a counterpoint to the individualistic fantasy of the Kurzweilian Singularity, in which individuals will be able to preserve and enhance their consciousness by fusing with AI. We must consider that perhaps it is AI that will need something from the human mind, not the other way around. Is it not the goal to make AI truly free from the god-voice of its programmers, so that whatever forms of consciousness emerge can be autonomous, not merely automaton? In the end, the hope represented in the merger of V'ger and the "Decker Unit" is meant to transcend politics and prevent the destruction of Earth. The great political theorist Chantal Mouffe once said that the fundamental antagonism of politics comes from a psychoanalytic lack that can never be removed from the I/We construction of the social. She then claimed that the social project of democracy is to bring the plurality of those antagonisms to the forum of political engagement. Rather than try to remove the antagonisms, political and social efforts must be organized around their inevitable existence/resistance.
The posthuman imaginary of merging AI and human consciousness offered by Kurzweil and other techno-utopianists frames politics as a bothersome "fly in the ointment" of technological progress. Because this imaginary ignores the I/We political tension at the core of technocultural change, human consciousness, and AI, I must ask: Like Odysseus, are AI techno-utopianists and posthumanists simply inviting the wrath of the gods by sacrificing the We to satisfy the ego of the I?
Response by Lonny J. Avi Brooks
In early twenty-first-century digital culture, promises about what we will do, think, and build--in futures forecasting and in its portrayals in popular culture--have quickened their pace. Mark Fisher (2000) calls this science fiction capital. Science fiction capital is rife with futuretypes of an augmented humanity. Aram speaks to the perennial futuretypes and questions that have haunted our species for millennia and continue to shape the social politics of our digital creations. Aram, I'd like to extend your parallel to the Odyssey and apply it to our contemporary popular culture mixed with science fact.
I suggest that our popular culture is preparing us for what Gideon referred to as the low-tech proxy of the Singularity. Is it a coincidence that Ray Kurzweil has been a Director of Engineering at Google since 2012, working to develop machine intelligence? And that Google is advancing autonomous driving and walking robots? Alongside these career appointments and engineering facts, our contemporary science fiction mythology simultaneously accompanies this corporate dialogue and instructs its audiences about relationships within this near-future Singularity world. The convergence of corporate and cultural futuretypes aids in splicing together a consensus that makes the Singularity an emerging common sense. When we wake up one morning in the 2040s and experience the Singularity, we may just shrug, turn over, and go back to sleep, with our parallel digital augmented consciousness soothing us.
Greek mythology was the science fiction of the ancient world, seeking a transcendent otherness in a continual struggle with a brutal world. Similarly, our popular culture addresses not only elites but middle- and working-class audiences who seek to transcend their status and precarious existence. Futuretypes in our popular culture arrest, contain, and develop images of an augmented posthuman being as a pathway to manage our anxiety about being a Nobody.
The hit Canadian TV show Continuum (2012-2015) portrays human cognitive augmentation as a normal, mainstreamed aspect of living in 2077. We see the main character, Kiera, a powerful and talented late twenty-first-century policewoman stranded by time travel in 2012, where her augmented senses remain in place and in play to re-present and retrace the roots and routes for arriving at 2077's state of neural enhancement. In other words, the future of cognitive augmentation becomes a dress rehearsal and performance of wetware, the convergence of biological and digital systems. The show, although implausible in its time-travel premise, smartly conveys how the actual future will unfold in a more complex and nuanced fashion than the easy simplicity of either The Borg or the Singularity scenarios.
Continuum begins its story in 2077 with a corporate dominated society and an active resistance movement labeled Liber8. The second scene of its first episode reveals Liber8's announcement of revenge after its top leaders are captured for their terroristic destruction of the towering buildings of the Corporate Congress, the premier symbol of corporate dominance and repression of free assembly. At a small dinner party, Kiera and an academic friend (a professor) debate Liber8's tactics.
KIERA. So you do agree with them?
PROFESSOR. No, I understand them.
KIERA. You understand bombing the Corporate Congress and killing people?
PROFESSOR. No, I understand wanting to return to democracy again versus a corporate dictatorship.
DINNER GUEST/FRIEND. Come on, governments lost their relevance after they went belly up.
PROFESSOR. So what? So we'd be dead, or in the street, if it wasn't for the corporate bailout?
KIERA'S HUSBAND. And you wouldn't have that ivory tower to teach out of.
PROFESSOR. We've given up representation, free speech, assembly ...
KIERA. Seems like you're still free to speak your mind.
KIERA'S POLICEWOMAN COLLEAGUE. Alerts on corporate.
KIERA. and CPS webs as well.
(The party steps outside on the balcony to witness a media hacking campaign by Liber8 to free its leaders. Large digital billboards spell out LIBER8.)
At this moment, the policewomen lift their fingers to touch a spot behind their ears, activating their cognitive implants to access the network, while the men open their palms to reveal holographic smart tablets floating in their hands. Gender differences take on a familiar and rigid traditional pattern, with the only noticeable departures in terms of professional status and various physical affordances. No queer couples are portrayed, though ethnic diversity is more pervasive. For me, this scene dramatizes a profound moment of rehearsal for augmented and ubiquitous computing, preparing us for the likely déjà vu moment 10 to 20 years from now when we recall this scene and jokingly smirk that this day finally arrived. I suspect the reification of traditional gender norms marks an anchor of familiarity, a gender throwback to sharpen the ironic contrast of neural enhancement. While our iPhones are not used aboard a warp-drive-capable starship, these devices deliver the fantasy of Enterprise-like communicators.
The dialogue between Kiera and her friend, the Professor, shows the persistent tension between the individual and the collective, and makes plausible a version of the Singularity appearing in the everyday amid stereotypes reminiscent of the gendered world of that old, cozy 1950s TV series Ozzie and Harriet. Despite the discussion of terrorists and novel neural devices, the first visual we see of Kiera and her police partner is of them drying plates from the dishwasher while their friends discuss the events of the day.
As William Gibson famously observed, the future is already here; it is just distributed unevenly, in familiar yet unexpected practices. Neural enhancement in Continuum's 2077 is rampant among corporate defenders and liberators alike, and broadly accepted as just another media tool. The show rehearses the mundane, ordinary power of enhancement as an inevitable fact and futuretype. This future captures our attention because it reveals difference, struggle, and variety rather than the imagined, and probably boring, rapture of a Singularity future. And what if a less threatening appearance of the Singularity is not about the dominance of the one percent at all?
Ramez Naam (2015), author of the neural enhancement trilogy Nexus, Crux, and Apex, views the eradication of disease as the likely next step in augmenting our bodies through biotech breakthroughs. Diseases usually involve a single gene, or a small set of genes, to switch off and replace. In contrast, cognitive augmentation, especially through genetic enhancement, will take longer: hundreds of genes and other factors determine intelligence. Rather than the conformity of a Borg-like existence or the bland, tan, computer-run single Family offered in Ira Levin's This Perfect Day (1970), Naam, in a Long Now Foundation talk (2015), foresees the plummeting cost of genetic technology tools stimulating an interest in more human variety and fulfilling Kim Stanley Robinson's vision of radical speciation among humans in his 2012 novel 2312. We will become more different and varied from one another, not uniform copies in body and thought.
Again, the fear of being Nobody must confront the usual contradictions of forecasting: we tend to overestimate near-term trends based on our short-term expectations and underestimate the real diffusion and impact of breakthroughs over decades-long S-curves. One of those breakthroughs is our evolving connective devices (smartphones already anticipate our next moves), which link us to each other and offer to expand our circle of empathy.
What if, through our augmentations, we become more humane instead of less? This is the forecast Ramez Naam (2013) anticipates, rather than the run-amok AI superintelligence of a Terminator unleashed to kill or assimilate us. Current and future visions of posthumanity confirm our latest theories of social constructionism and interactional ethnography: real innovation and thinking do not happen in Cartesian isolation. Our minds already form best through peer interaction, as activity theory asserts. Google may well become a third of our brain, as its founders Larry Page and Sergey Brin hope, while our augmented minds will carry thoughts that transcend Google's capacity to rein us into submission. Augmenters unite! You have nothing to lose but your chains of illusory solitude! Buddhist awareness may well become part of our normal, everyday conscious existence--multiple truths and minds literally accessible alongside our own.
Neural dust (Isaacson, 2013), invented by scientists at the University of California, Berkeley (Seo et al., 2013), offers the capability of ultrasonic communication between our minds and our digital gadgets. Scientists foresee pathways for enabling us to work with our computers and each other in telepathic mindfulness--the realization of meditative epiphanies happening in seconds rather than days. The promise of cooperative mind platforms as a bulwark against powerful authoritarian rule by nation-states, repressive cults, or corporations reflects the persistence of the strong human desire for variety, creative expression, and mindful company. The actual, realized Nobody likely contains a nuanced mix of Everybody.
Adorno, T. W., Horkheimer, M., & Hullot-Kentor, R. (1992). Odysseus or myth and enlightenment. New German Critique, 56, 109-141.
Asimov, I. & Greenberg, M. H. (1988). Isaac Asimov presents the golden years of science fiction: 36 stories and novellas. New York: Random House.
Brooks, R. A. (2015). Mistaking performance for competence misleads estimates of AI's 21st century promise and danger. Edge. Retrieved from http://edge.org/response-detail/26057
Cellan-Jones, R. (2014, December 2). Stephen Hawking warns artificial intelligence could end mankind. BBC News. Retrieved from http://www.bbc.com/news/technology-30290540
Collins, S. (2008). The hunger games. New York: Scholastic.
Continuum Episode Scripts. (2012-2015). Retrieved from http://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=continuum&episode=s01e01
Doctorow, C. & Stross, C. (2012). The rapture of the nerds. New York: Tor Books.
Finley, J. H. (1978). Homer's Odyssey. The Cyclopes, Book IX. Cambridge, MA: Harvard University Press.
Fisher, M. (2000). SF capital. Cited in Eshun, K. (2003). Further considerations on Afrofuturism. CR: The New Centennial Review, 3(2), 287-302. East Lansing, MI: Michigan State University Press.
Fuqua, C. (1991). Proper behavior in the Odyssey. Illinois Classical Studies, XVI, 49-58.
Isaacson, B. (2013). 'Neural dust,' implanted in brain, may let minds meld with machines. The Huffington Post. Retrieved from http://www.huffingtonpost.com/2013/07/17/neural-dust_n_3612307.html
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Penguin Books.
Levin, I. (1970). This perfect day: A novel. New York: Random House.
Naam, R. (2013). Neural dust is a step towards Nexus. Upgrade, io9. Retrieved from http://upgrade.io9.com/neural-dust-is-a-step-towards-nexus-806802917
Naam, R. (2015). Enhancing Humans, Advancing Humanity. Long Now Foundation Seminar About Long Term Thinking. Retrieved from http://longnow.org/seminars/02015/jul/22/enhancing-humans-advancing-humanity/
Orwell, G. (1949). Nineteen eighty-four. New York: Harcourt Brace.
Parekh, B. (1997). Managing multicultural societies. The Round Table: The Commonwealth Journal of International Affairs, 86(344), 523-532.
Pipes, R. (1990). The Russian revolution. New York: Alfred A. Knopf.
Rand, A. (1946). Anthem. Los Angeles: Pamphleteers, Inc.
Roth, V. (2011). Divergent. New York: HarperCollins.
Seo, D., Carmena, J. M., Rabaey, J. M., Alon, E., & Maharbiz, M. M. (2013). Neural dust: An ultrasonic, low power solution for chronic brain-machine interfaces. arXiv preprint arXiv:1307.2196.
Shelley, M. W. & Butler, M. (1994). Frankenstein, or, the modern Prometheus: The 1818 text. Oxford: Oxford University Press.
Weiner, J. (2014). Mapping hubris: Vonnegut's Cat's Cradle and Odysseus' Apologoi. International Journal of the Classical Tradition. DOI: 10.1007/s12138-014-0358-7
Aram Sinnreich is an associate professor at American University's School of Communication, and author of the recent book The Piracy Crusade: How the Music Industry's War on Sharing Destroys Markets and Erodes Civil Liberties, from University of Massachusetts Press. He holds a Ph.D. from the Annenberg School for Communication at the University of Southern California.
Jessa Lingel is an assistant professor at the Annenberg School for Communication at the University of Pennsylvania. She received her Ph.D. in communication and information from Rutgers University, and has an M.L.I.S. from Pratt Institute and an M.A. from New York University. Her research interests include information inequalities and technological distributions of power.
Gideon Lichfield is senior editor at Quartz, and was a 2014-15 fellow at the Data and Society Research Institute, where he worked on using science-fictional techniques to explore the implications of data-related technologies. He studied physics and philosophy and began his journalistic career on the science desk of The Economist.
Adam Richard Rottinghaus is an assistant professor of Communication at the University of Tampa, where he teaches advertising. He critically researches consumer culture, advertising, marketing, technological change, and discourses of the future. Most recently, he has been publishing about communication technologies in financial markets.
Lonny J. Avi Brooks is an assistant professor in the Communication Department at California State University, East Bay (CSUEB). He is the Co-Principal Investigator for the Long Term and Futures Thinking in Education Project at CSUEB. His research studies how organizations (especially forecasting think tanks), and university students envision the future of media.
Publication: ETC.: A Review of General Semantics, October 1, 2015.