Security is a curious concept, part perception and part prognostication. Webster's captures both sides of this dichotomy in its definition: “freedom from danger; freedom from fear.”
Fear concerns a perceived threat; danger reflects what actually threatens us. We fear attacks that never come and live blissfully unaware of true dangers around the corner. This uncertainty is only intensified when our security concerns are transformed by changes in technology.
Indeed, technology and its abuse evolve hand in hand. Just as the internal combustion engine begat highway accidents and auto theft, many of the most visible and transformative successes of computing technology — e-mail, databases, e-commerce, the Web and so on — have ushered in whole new classes of abuse.
But the evolution of computer security is not merely some dark mirror, passively reflecting advances in technology. While technology provides new opportunities for threats, these become true dangers only when there is a motivation to exploit them and a means to do so. Anticipating security threats is not merely a matter of reasoning abstractly about how new technology might raise new risks; it requires an understanding of human nature.
Driven by commerce
Today, the evolution of computer abuse — and therefore of computer security — is driven by commerce. Botnets, spam, phishing, banking trojans, identity theft and so on are all commercially motivated enterprises perfected in a constant arms race with a well-financed computer security industry.
As little as a decade ago, this ecosystem did not exist. The computer viruses and worms of the 20th century were the work of joy riders, driven primarily by an ambition for notoriety.
But once it became possible to make money from computer infection, whether through advertising (like spam) or theft (like stealing bank account credentials), this economic engine fed a bloom in online crime that we are still experiencing.
Such economically motivated attacks are unlikely to disappear, and we can expect new threats to directly reflect each new technical innovation in how money is used, moved and stored. Emerging cellphone-based payment systems, automated banking transfers and the increasingly liquid markets for online goods in multiplayer games will all be ripe targets for online crooks.
While criminal profit-seeking is perhaps the largest force transforming the computer security landscape today, it is by no means the only one. Another is the large-scale collection and use of personal data.
As we leave ever more detailed online footprints — via purchasing, browsing and social relationships — a vast “big data” ecosystem has emerged to collect, process and resell this information. Concerns about this issue are typically framed in terms of privacy: How much do I want others to know about me? How might it affect my ability to get health insurance, employment or credit?
While these are important questions, they do not capture the full extent of how this data might be used — not just to extract information about people's desires and social relationships, but to apply that understanding to affect their behaviour. Nor would this be limited to the banal goal of getting you to purchase a particular product; it could be used for Internet-scale social monitoring and manipulation.
The ease with which we adopt online personas and relationships has created a collective blind spot that computer technology is well suited to exploit. Advances in natural-language processing and data mining make it entirely feasible to mint millions of “social bots,” each establishing online friendships with their targets like virtual con men, each building trust over time and delivering personalised messages designed to elicit information, sway opinion or call to action.
This idea, which one of my colleagues has called “social architecture,” completely upends traditional computer security concerns: The threat is not of humans controlling or monitoring our computers, but precisely the converse.
As an instrument of war
Finally, there is growing potential for the abuse of computers as an instrument of war. The obvious issues involve espionage and information theft, but the real transformation is much broader.
The Stuxnet worm, designed to sabotage gas centrifuges in Iran, made it clear that computer attacks can have physical, real-world consequences — a particularly troubling precedent because computing capabilities are now embedded in virtually every aspect of our lives. The power we use, the water we drink, the cars, planes and trains we travel in, the elevators and air-conditioning in our buildings, even many of our children's toys — all are controlled by computers.
A parallel trend, fuelled by cheap wireless connectivity, is that these devices are increasingly networked. And while few of these systems have been attacked in anger, it is this very fact that leads most of them to be rife with vulnerabilities — a sheltered ecosystem with no immunity to attacks from an outside invader.
Earlier this year, my colleagues and I demonstrated weaknesses that allowed us to remotely infiltrate, track and control popular automobiles more than 1,000 miles away. Other researchers have demonstrated remote attacks on implantable cardiac defibrillators, smart power meters, utility control networks and so on.
The crucial question is whether these are merely Chicken Little fears or real dangers. And the answer will be a matter not of technology but of politics. Do conflicting powers believe that such attacks will advance their aims better than alternatives, that they are worth the effort to develop, that they are worth the risks of retaliation?
There is a tendency to believe that computer security is different from other security. Maybe because computing is mechanistic and predictable, we like to think that security questions should succumb to some form of deterministic analysis.
But security is at its heart a human issue. It is about conflict, and computers are merely a medium by which conflict can be expressed. The future of computer security, then, is less about the future of technology than it is about the future of human relations. (Stefan Savage is a professor of computer science and engineering at the University of California, San Diego.) — New York Times News Service