Six key areas of investment for the science of cybersecurity.

Cybersecurity is perhaps the most difficult intellectual profession on the planet. The technical basis for that which needs security changes rapidly, and we have sentient opponents. We have no real ability to perform controlled experiments, yet uncontrolled natural experiments are all around us all the time, even though data quality from those natural experiments is a constantly confounding issue.

There are fields where it seems as if scientific progress has a certain cadence, a certain predictability, a pace of progress akin to lava overspreading a coastal plain, a largely stable forward velocity. That does not seem to be the case with cybersecurity, where breakthroughs occur with an event-rapidity that I just don't see elsewhere. Maybe I'm just not looking hard enough, but we seem to be so much closer to the work factor of a fire department than to the work factor of an accounting firm, to pick two examples approximately at random.

One of the questions we have yet to answer is whether vulnerabilities are sparse or dense; if and only if vulnerabilities are sparse does it actually make sense to allocate the effort to find them or reward those who do. If vulnerabilities are dense, then treasure should not go to finding them but to making systems resilient to them. I ask about this in varied settings; I get strong--and I mean strong--opinions over the full range of dense to sparse. Smart, knowledgeable people say "too dense to measure" while other smart, knowledgeable people say "too sparse to measure." That's not a trick question. It is a steering question like no other.

In the meantime, it seems to me that we are near a fork in the road, a fork where one road is that of generating provably defect-free code followed by long-term, brutally rigorous change control, while the other road is that of moving target defense, rapid release, DevOps, et cetera. These alternatives are antithetical yet both are promising, and both are fed by real scientists making real progress, but after having chosen one road, switching to the other at some later time seems likely to be infeasible. As with the "availability calculus," do we maximize the mean time between failures or do we minimize the mean time to repair? We cannot do both, nor, therefore, should we try.

The question of whether cybersecurity is yet a science is a hard one. I am sorely tempted to answer the question "Is cybersecurity a science?" with "Getting closer, but not yet"--to say, in other words, that we are in the pre-paradigmatic stage with a variety of schools of thought. We must then ask about candidate paradigms of cybersecurity. If they exist and have turned over from time to time, then my answer would be simply wrong and cybersecurity may already be a science. But let me repeat the one thing that may make cybersecurity different from all else: we have sentient opponents. The physicist does not. The chemist does not. Not even the economist has sentient opponents. We do. What puzzles we have to solve are not drawn from some generally diminishing store of unsolved puzzles, nor could our theories completely explain all observable fact, thus reducing our worries and our work to engineering alone. There is something different about a search for truth when there isn't any, or at least any that lasts long enough to exhaustively explore.

Science tends to take us places where policy cannot follow. Policy tends to take us places where science cannot follow. Yet neither science nor policy can be unmindful of the other. Both science and policy heavily influence, if not control, the possible futures we might find ourselves inhabiting. It is clear that policy is having ever-greater difficulty in keeping up with science, yet science without policy limits is inevitably dystopian.

In past months, very well-informed individuals have warned about advances in the fields of both artificial intelligence and genetic engineering as being likely to introduce irreversible, unintended effects that are permanently incompatible with fundamental values. I side with the above in both cases, as I believe advances in cybersecurity are likewise introducing irreversible and unintended effects that are permanently incompatible with fundamental values.

So, if we are to help science and policy help each other, what are we to do? It may be that making predictions is the core value. In that vein, I will confine myself to six points where I see science, including applied science, asking us to look ahead.

Identity

Miniaturization will continue its long-running progression and as a consequence, devices will continue to proliferate into spaces in which they were never before present. Burgeoning proliferation demands device autonomy, and will get it. For autonomy to not itself be a source of irredeemable failure modes, devices will have individual identities and some degree of decision-making capacity.

As device counts grow, device identity eclipses (human) user identity, because user identity can be derived from device identity. The proliferation of devices means that users are each and severally surrounded by multiple devices, devices whose identity is baked into their individual hardware, as is already the case in mobile telephony.

There is then neither need nor process to assert "My name is Dan" as Dan's several devices will collectively confirm that this is Dan, perhaps in consultation with each other. As per Zuboff's Laws (1), all devices are therefore sensors and as the devices themselves have immutable device identities, Dan's claim to being Dan is decided algorithmically. And distally.

Cryptographic keys for users thus become irrelevant, as devices will have them, thereby freeing users from key management, to say nothing of password drills. The Fifth Amendment is entirely mooted, as courts have already ruled that only something you know is protected thereunder, not something you are or something you have; that is to say, production of devices under subpoena cannot be thwarted.

The longstanding debate over whether identity should be name-centric (where "Dan" is the identity and some key is an attribute of "Dan") or key-centric (where the key is the identity and "Dan" is an attribute of that key) is thus decided in favor of key-centricity though the keys are now held in a fog of small devices. This setting mimics how a stratum of elite people carry neither identification nor money--in the context of their retinue there is no need for such.

For the result of this data fusion to not be a unitary identity for the individual user, policy will have to demarcate data fusion with a vigor it has never before dared.

Ownership as perimeter

The paradigm of cybersecurity has long been perimeter control, but that same proliferation of devices rewrites the calculus of what is a perimeter. It is clear that the design of the Internet as we now know it rests on two principles above all others: preferential attachment and end-to-end communication protection. Preferential attachment yields scale-free network growth that, in turn, maximizes network resistance to random faults; Internet build-out could not have happened otherwise. The end-to-end principle is and has been the fuel for innovation as end-to-end scales whereas permission brokering does not.
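
To make "preferential attachment" concrete, here is a minimal sketch in Python of the mechanism, not of the Internet itself: each newly attached node links to an existing node with probability proportional to that node's current degree, so well-connected hubs accumulate still more connections and a scale-free topology emerges. (The node count and the one-link-per-new-node rule are illustrative assumptions.)

    import random
    from collections import Counter

    def grow(n_nodes, seed=1):
        # "targets" lists node i once per unit of degree, so a uniform
        # random.choice() over it is exactly a degree-proportional
        # (preferential) attachment rule.
        random.seed(seed)
        edges = [(0, 1)]
        targets = [0, 1]
        for new in range(2, n_nodes):
            old = random.choice(targets)
            edges.append((new, old))
            targets += [new, old]
        return edges

    edges = grow(10000)
    degree = Counter(node for edge in edges for node in edge)
    print(degree.most_common(5))  # a handful of hubs dominate the graph

The heavy-tailed degree distribution this produces is the scale-free growth described above, and its resistance to random faults follows directly: a randomly chosen node is almost never a hub.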

Both of those principles are under stress. First, the S-curve of name-addressable Internet growth passed its inflection point in November of 2008, and since that time growth rates have slowed. Second, random faults no longer pose the availability risk they once did, while carriers and governments alike clearly want non-preferential attachment: carriers in their desire for economic hegemony, free-world governments in their desire for attribution, and unfree-world governments in their desire to manipulate information flow.

Add in the proliferation of small devices and the paradigm of cybersecurity can no longer be perimeter control. For example, let's count cores in the Qualcomm Snapdragon 801. The central CPU is 4 cores, the Adreno 330 GPU another 4, Video Out is 1 more, the Hexagon QDSP is 3, the Modem is at least 2 and most likely 4, and Bluetooth, the USB controller, and the GPS are 1 each. The Wi-Fi is at least 1 and most likely 2, and none of this includes charging, power, or display. That makes somewhere between 18 and 21 cores. In the vocabulary of the Internet of Things, I ask you: is that one thing, or the better part of two dozen things? It is pretty certain that each of those cores can reach the others, so is the perimeter to be defended the physical artifact in the user's pocket, or is it the execution space of those cores, each and severally?
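
For the record, the arithmetic of that tally, with (low, high) bounds wherever the text says "at least / most likely" (a sketch only, restating the counts above):

    # Core counts for the Qualcomm Snapdragon 801, as enumerated above;
    # (low, high) bounds capture the "at least / most likely" hedges.
    cores = {
        "central CPU":    (4, 4),
        "Adreno 330 GPU": (4, 4),
        "Video Out":      (1, 1),
        "Hexagon QDSP":   (3, 3),
        "Modem":          (2, 4),
        "Bluetooth":      (1, 1),
        "USB controller": (1, 1),
        "GPS":            (1, 1),
        "Wi-Fi":          (1, 2),
    }
    low = sum(lo for lo, hi in cores.values())
    high = sum(hi for lo, hi in cores.values())
    print(f"{low} to {high} cores")  # -> 18 to 21 cores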

I looked at seven different estimates of the growth of the Internet of Things as a market phenomenon--everything from smart electric meters to networked light bulbs to luxury automobiles--and the median is a compound annual growth rate of 35%. If perimeter control is to remain the paradigm of cybersecurity, then the number of perimeters to defend in the Internet of Things is doubling approximately every 28 months.
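
The doubling time is just arithmetic on the compound growth rate; a one-line check, assuming steady 35% annual growth:

    import math

    cagr = 0.35  # median compound annual growth rate from the estimates above
    doubling_years = math.log(2) / math.log(1 + cagr)
    print(f"{doubling_years:.2f} years")  # -> 2.31 years, roughly 28 months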

So what is to be the perimeter of control from a cybersecurity point of view? Is it ownership that demarcates the perimeter? More and more of user capability is controlled by licensure, not ownership in the dictionary sense of that word. The science is taking us away from ownership conferring cradle-to-grave control and towards a spectrum of temporally constrained permission granting; I can buy a $200,000 John Deere tractor, but not until I accept a software license agreement. I can give you my bed, but I cannot give you my iTunes. Self-driving cars are perhaps as good an illustration as any; over-the-air auto-update of firmware will not be optional in either time or place, and vehicle-to-vehicle communication will do route selection in the name of the common good. In the digital world, nothing comes without strings attached.

Control diffusion

As has been shown in finance, if one entity can do high-speed trading then all must; but whereas predatory and/or unstable trading is subject to regulatory control, cyber predation is not, and cyber predators have zero legacy drag. As such, turning over our protections to machines is inevitable. Science and startups alike are delivering a welter of automation for protection, most of it not involving recondite algorithms but rather big-data-fueled learning about what is normal, the better to identify that which is not and is thus suspect.
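
A minimal sketch of that "learn what is normal, flag what is not" pattern, with invented traffic figures and a simple standard-deviation band standing in for whatever richer model a real product would learn:

    import statistics

    # Hypothetical requests-per-minute observations for one host during
    # normal operation; the numbers are invented for illustration.
    baseline = [52, 48, 50, 55, 47, 53, 49, 51, 50, 54]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)

    def is_suspect(observed, threshold=3.0):
        # Anything far outside the learned band of normal is suspect.
        return abs(observed - mu) > threshold * sigma

    print(is_suspect(51))   # False: within normal
    print(is_suspect(400))  # True: abnormal, and thus suspect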

I leave to any policy discussion the question of whether the speeds at which cybersecurity automation must run will even allow occasional interruption to ask some human operator for permission to act, or whether cyber 'kill decisions' must be automated on the argument that only when so automated can they respond in time. If the latter holds, and I am certain that it will, science will be under the gun to encode human ethics into algorithms that will thereafter free-run. Put differently, I predict that it is in cybersecurity, per se, where the argument over artificial intelligence will find its foremost concretization. As an example of an unevaluable vignette, the self-driving car will choose between killing its solo passenger and killing fifteen people on the sidewalk. Many are the examples of airplane pilots sacrificing themselves to avoid crash-landing in populated zones. Would you willingly ride in an altruistic vehicle?

Coupled with algorithmic user identification, control will enter a state where trust is multi-way, not one-to-one. It is hard to overestimate just how much the client has become the server's server. Take JavaScript, which is to say server-side demands that clients run programs as a condition of use, or web pages recursively assembled from unidentifiable third parties; the HTTP Archive says that the average web page now makes out-references to 16 different domains and makes 17 JavaScript requests per page, and the JavaScript byte count is five times the HTML byte count. A lot of that JavaScript is about analytics, which is to say, surveillance of the user.

But as a practical matter, any important control (such as for medical emergencies) needs an override. Barring national security situations, exercising such an override is akin to a failure, and it is a failure that must not be silent. That is to say, if the pinnacle goal of security engineering is "no silent failure," then the as yet unmet challenge is how to design cybersecurity such that it never fails silently. There is scientific work to be done here--full automation of cybersecurity maximizes the downside cost of falsely positive indicators of attack.

Communications provenance

Provenance of network traffic will rise to new importance unrelated to quality of service or transport neutrality.

Executives delegating correspondence handling to their assistants have heretofore driven delegation of credentials; as devices proliferate, delegation of credentials and authority becomes a necessity across the board, at least for First World digerati. Take loading a web page in a browser: the browser does proxying, nameservice lookup, etc., and eventually loads that page plus subsequent web page dependencies, probably from other sites. In other words, there are various levels of "who" actually requested what, such as which piece of JavaScript invoked Google Analytics. As a one-off experiment, I looked at the topmost page of cnn.com; there I found 612 HREFs across 38 hosts in 20 domains, even without evaluating the 30-odd JavaScript files there. Competent scientists are studying the issue of how to characterize multi-dimensional attack surfaces, and we should attend to their results.
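
That one-off experiment is easy to repeat. Here is a minimal sketch using only Python's standard library; the "last two labels" rule for a registrable domain is a rough approximation, some sites refuse the default urllib user agent, and whatever page you fetch today will yield different numbers than the cnn.com snapshot above:

    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class HrefCounter(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hrefs = []
        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

    url = "https://www.cnn.com/"  # or any page you care to measure
    parser = HrefCounter()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))

    hosts = {urlparse(h).hostname for h in parser.hrefs if urlparse(h).hostname}
    domains = {".".join(host.split(".")[-2:]) for host in hosts}  # rough cut
    print(f"{len(parser.hrefs)} HREFs across {len(hosts)} hosts "
          f"in {len(domains)} domains")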

If cybersecurity is to remain the driving reason for egress filtering, then provenance--as in "Who ordered this page?"--is the crucial variable for intelligent flow control. If cyber integrity of the browser platform itself is to remain the topmost user goal, then agency--again, as in "Who ordered this page?"--is likewise the most important variable for permission decisions.

This need will be met with traffic analysis extending into the execution environment. When the general public came to need encryption, the commercial sector caught up to the military sector in the application of cryptography within a decade. Now the marketers are driving the commercial sector to catch up to the military sector in traffic analysis. How the traffic analysis that marketers demand (and will get) meshes with the traffic analysis of end-users delegating human authority to their growing constellation of devices remains to be seen, but with dual demand for traffic analysis, the commercial sector will fill that demand one way or another.

But even if the traffic analysis the public and the marketers want is of a toy variety compared to what the military sector needs, there are two other considerations at play. One is that a nonnegligible fraction of Internet backbone traffic cannot be identified by protocol, i.e., it has no provenance and is likely peer-to-peer. While intentionally obscure traffic may as easily come from pedophiles as from heroic freedom fighters, in a world where machines provide cybersecurity by learning what is normal so as to tag what is abnormal, the pedophiles and the freedom fighters will stand equal chances of being blocked, if not outed.

The other consideration is junk traffic, meaning traffic whose emitter is on autopilot but whose purpose is long defunct. Years ago, my colleagues spent some time trying to figure out what was calling one of our dialup numbers. In the end, it turned out to be an oil tank in an abandoned building that was outfitted to request a fill when needed, and we had inherited the number to which such requests had once gone.

Junk traffic will have to be dealt with via provenance or some discoverable correlate of provenance. Perhaps we will remanufacture spam detection for this purpose. Perhaps traceability will become the rule of law as soon as geolocation applies to the Internet as much as it now applies to cell phone triangulation.

Everything is unique

Science is rapidly teaching us that everything is unique if examined at close enough detail. Facial recognition is possible at 500 meters, iris recognition is possible at 50 meters, and heartbeat recognition is possible at 5 meters. Your dog can identify you by smell; so, too, can an electronic dog's nose. Your cell phone's accelerometer is plenty sensitive enough to identify you by gait analysis. A photograph can be matched to the camera from which it came as reliably as a bullet can be matched to the barrel of the gun through which it passed. Some apartment building owners now require that tenants provide a DNA sample of their dog so that unscooped poop can be penalized.

When everything is detectably unique, decision support of many sorts becomes possible. Assessing nuances (such as whether you are angry) will be embedded in automatons. Accountability will doubtless be extended to ever more minor behaviors. That heartbeat recognition technology is already slated to be part of automobiles. Courtroom alibis will soon be backed by cybersecurity-like evidence, noting that because an alibi involves evidence of innocence rather than of guilt, the privilege against self-incrimination is not implicated; such evidence is instead subject to compelled disclosure. The testimony of spouses against each other will be unnecessary--their devices will do.

Opaqueness is forever

Where data science spreads, a massive increase in tailorability to conditions follows. Even if Moore's Law remains forever valid, there will never be enough computing; hence data-driven algorithms must favor efficiency above all else. Yet the more efficient the algorithm, the less interrogatable it is; that is to say, the more optimized the algorithm is, the harder it is to know what the algorithm is really doing.

The more desirable some particular automation is judged to be, the more data it is given. The more data it is given, the more its data utilization efficiency matters. The more its data utilization efficiency matters, the more its algorithms will evolve toward opaque operation. Above some threshold of dependence on such an algorithm in practice, there can be no going back. As such, if science wishes to be useful, preserving algorithm interrogatability in the face of efficiency-seeking, self-driven evolution is the research-grade problem now on the table. If science does not pick this up, then Larry Lessig's characterization of code as law is fulfilled, and permanently so.

Implications: Why this matters

There is no argument whatsoever that the proliferation of devices and information is empowering. Technology is today far more democratically available than it was yesterday and less than it will be tomorrow: 3D printing, the whole "maker" community, DIY biology, micro-drones, search, home automation, constant contact with whomever you choose to be in constant contact with, instrumentation of every stripe and caliber, the steady migration of military technology to general government use, thence to the rich, thence to the lumpenproletariat--these are all examples of democratizing technology. This is perhaps our last fundamental tradeoff before the Singularity occurs: Do we, as a society, want the comfort and convenience of increasingly technologic, invisible digital integration enough to pay for those benefits with the liberties that must be given up to be protected from the downsides of that integration? If, as the late Peter Bernstein said, risk is that more things can happen than will, then what is the ratio of things that can now happen that are good to things that can now happen that are bad? Is the good fraction growing faster than the bad fraction, or the other way around? Is there a threshold of interdependence beyond which good or bad overwhelmingly dominates?

Now that we need cybersecurity protections to the degree that we do, to whom does the responsibility devolve? If the worst laws are those that are unenforceable, what would we hope our lawmakers say about technologies that are not yet critical but soon could be? Do we forbid becoming critically dependent on them when it is the sheer magnitude of adoption that makes them critically essential?

The need for what we have heretofore called cybersecurity is now so varied that it is no longer a single field but many. There are perhaps 1,000 cybersecurity startups in some stage of the funding game, a fair fraction of them spinouts from highly focused university research projects. Generalists such as myself cannot be replaced; there is too much for the novitiate to learn. The core knowledge base has reached the point where new recruits can no longer hope to someday become competent generalists; serial specialization is the only broad option available to them.

As I said earlier, cybersecurity is perhaps the most difficult intellectual profession on the planet. Ray Kurzweil is beyond all doubt correct; within the career lifetime of nearly everyone in this room, algorithms will be smarter than we are, and they will therefore be called upon to do what we cannot--to protect us from other algorithms, and to ask no permission in so doing. Do we, like Ulysses, lash ourselves to the mast, or do we, as some would say, relax and enjoy the inevitable? What would we have science do? What are the possible futures you will tolerate? Which horses do you not want let out of the barn?

Editor's note:

When I heard Dan Geer's talk at the recent 'Suits and Spooks' conference in New York City, I was spellbound. His unique speaking style combined with erudite thinking on many cybersecurity topics made for a fascinating learning experience.

I wanted to publish his ideas without losing the magic of the in-person experience. While this is truly one of those "you had to be there" situations, this humble magazine is doing its best to deliver it to you. Here is Dan's talk in (nearly) unedited form.--MD

This article is an edited version of remarks given at the Suits and Spooks conference (suitsandspooks.com), June 19, 2015, in New York, NY.