Internet Hacking Is About to Get Much Worse
We can no longer leave online security to the market.
By Bruce Schneier
Mr. Schneier is a fellow and lecturer at the Harvard Kennedy School.
It’s no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they’re just the newsworthy tip of a very large iceberg.
The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.
The primary reason computers are insecure is that most buyers aren’t willing to pay — in money, features, or time to market — for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.
We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it’s stolen. Facebook account data might be important, but again, nobody dies when it’s stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.
Many of today’s new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them — collectively known as “the internet of things” — affect the world in a direct physical manner.
We’ve already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It’s not just that they can be attacked remotely — they can be attacked all at once. It’s impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can’t be hacked remotely. But that computer with four wheels and an engine? It — along with all other cars of the same make and model — can be made to run off the road, all at the same time.
As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don’t have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?
Another failing assumption is the security of our supply chains. We’ve started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.
Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.
These are problems that the market will not fix. Buyers can’t differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains makes it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.
The solution is complicated, and it’s the subject of my latest book. There are technological challenges, but they’re not insurmountable — the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that demands government involvement at every step.
First, we need standards to ensure that unsafe products don’t harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it’s a good start.
We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology’s Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework — which contains guidance on how to identify, protect against, respond to, and recover from security risks — is voluntary at this point, which means not everyone follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.
Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: it has passed a comprehensive privacy law and is now turning to security and safety. The United States can and should do the same.
We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to impose liability for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages — harms spelled out in the law that don’t require any further proof.
Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.
Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn’t stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.
No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products — all needed government regulation in order to become safe and secure.
Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It’s time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.
Bruce Schneier is a fellow and lecturer at the Harvard Kennedy School. His latest book is “Click Here to Kill Everybody: Security and Survival in a Hyper-connected World.”