Technology alone is not enough to solve the problem we face
In the movies, you can tell the best hackers by how they type. The faster they punch the keys, the more dangerous they are. Hacking is portrayed as a highly technical feat, a quintessentially technological phenomenon.
This impression of high-tech wizardry pervades not just our popular culture but also our real-world attempts to combat cybercrime. If cybercrime is a sophisticated high-tech feat, we assume, the solution must be too. Cybersecurity companies hype proprietary tools like “next generation” firewalls, anti-malware software and intrusion-detection systems. Policy experts like John Ratcliffe, a former director of national intelligence, urge us to invest public resources in a hugely expensive “cyber Manhattan Project” that will supercharge our digital capabilities.
But this whole concept is misguided. The principles of computer science dictate that there are hard, inherent limits to how much technology can help. Yes, it can make hacking harder, but it cannot possibly, even in theory, stop it. What’s more, the history of hacking shows that the vulnerabilities hackers exploit are as often human as technical — not only the cognitive quirks discovered by behavioral economists but also old-fashioned vices like greed and sloth.
To be sure, you should enable two-factor authentication and install those software updates that you’ve been putting off. But many of the threats we face are rooted in the nature of human and group behavior. The solutions will need to be social too — job creation programs, software liability reform, cryptocurrency regulation and the like.
For the past four years, I have taught a cybersecurity class at Yale Law School in which I show my students how to break into computers. Having grown up with a user-friendly web, my students generally have no real idea how the Internet or computers work. They are surprised to find how easily they learn to hack and how much they enjoy it. (I do, too, and I didn’t hack a computer until I was 52.) By the end of the semester, they are cracking passwords, cloning websites and crashing servers.
Why do I teach idealistic young people how to lead a life of cybercrime? Many of my students will pursue careers in government or with law firms whose clients include major technology companies. I want these budding lawyers to understand their clients’ issues. But my larger aim is to put technical expertise in its place: I want my students to realize that technology alone is not enough to solve the problems we face.
I start my class by explaining the fundamental principle of modern computing: the distinction between code and data. Code is a set of instructions: “add,” “print my résumé,” “shut the door.” Data is information. Data is usually represented by numbers (the temperature is 80 degrees), code by words (“add”). But in 1936, the British mathematician Alan Turing figured out that code could be represented by numbers as well. Indeed, Turing was able to show how to represent both code and data using only ones and zeros — so-called binary strings.
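A minimal sketch in Python, offered here as an illustration rather than anything from the course itself, makes the duality concrete: the very same bytes can be inspected as inert data or handed to the machine as instructions.

```python
# The same bytes, viewed two ways: first as data, then as code.

payload = b"print('now running as code')"   # a tiny program stored as raw bytes

# Treated as data: measure it and display its ones and zeros (shown in hex).
print(len(payload), "bytes of data")
print(payload.hex())

# Treated as code: decode the bytes and execute them as instructions.
exec(payload.decode("ascii"))
```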
This groundbreaking insight makes modern computers possible. We don’t need to rebuild our computers for every new program. We can feed our devices whatever code we like as binary strings and run that program. That zeros and ones can represent both code and data is, however, a blessing and a curse, because it enables hackers to trick computers that are expecting data into accepting and running malicious code instead.
Consider a simple hack I teach my students. An attacker sends an email that has a file attached. Because the file has a “.txt” extension, we assume it is a plain text file — that is, data — perhaps a grocery list or grades on a final exam. But when we open the file, the operating system will not only send the data to the screen but will also execute the malicious code the hacker has secretly embedded, allowing the attacker to seize control of our computer.
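The classroom exploit itself isn’t reproduced here, but a purely illustrative Python toy captures the same confusion of code and data: a program that expects harmless input and evaluates it can be made to run an attacker’s instructions instead. (The calculator and its inputs below are hypothetical, not the exercise from class.)

```python
# A toy "grade calculator" that expects harmless data: an arithmetic expression.
# Because it hands that data straight to eval(), the data channel becomes a
# code channel.

def average(expression: str):
    return eval(expression)   # expects data such as "(92 + 85 + 78) / 3"

print(average("(92 + 85 + 78) / 3"))   # intended use: prints 85.0

# A malicious "expression" is still just a string of bytes, but eval() treats
# it as instructions, so the attacker's command runs on our machine:
print(average("__import__('os').system('echo attacker code runs here')"))
```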
You can install security software to lessen this risk. But to eliminate the risk, you would have to prevent computers from treating binary numbers as both code and data — which would mean stopping them from being modern computers.
The good news is that there are promising ways to tackle the human dimensions of the problem — that is, the social, economic and psychological aspects. The bad news is that we have largely failed to pursue them.
Consider legal liability. The law offers few incentives for software developers to write better, more secure code. It rarely imposes substantial penalties for data breaches, which means that tech companies lack a financial motivation to take security seriously. The median American company devotes 10 percent of its budget to I.T., and 24 percent of that to security. That’s roughly 2 percent of total spending earmarked for protecting activities that companies understand, rightly, to be critical to their operations.
We can change that business calculus. We should, for example, hold software companies financially responsible for negligently building insecure software, a proposal recently endorsed by President Biden’s National Cybersecurity Strategy. Instead of shelling out money to private companies to fix bad technology after the fact, legislators should push them to produce good technology in the first place.
We can also help hackers themselves. Hackers are often thought of as brilliant disaffected young men who live in their parents’ basements and wreak havoc for the sheer fun of it. The truth is more familiar. Cybercriminals are, by and large, out to make a living — often in the absence of legitimate ways to use their skills.
Diversion programs in Britain and the Netherlands run hacking competitions in which teams of coders compete to hack a target network; these programs also pair coders with seasoned security professionals who act as mentors and direct their charges into the legitimate cybersecurity industry. At the moment, with an estimated 3.5 million cybersecurity jobs unfilled worldwide, one fewer attacker is one more desperately needed defender.
Toward the end of the semester, my class covers cryptocurrency, the “money” favored by cybercriminals. Opening a cryptocurrency account should be like opening a bank account: Customers should have to provide their Social Security number, government-issued identification and other personal identifying data. While U.S. law requires most cryptocurrency companies to follow such disclosure rules, it exempts certain brokers from collecting this information — and cybercriminals like using those brokers to escape detection.
Figuring out how hacking works is the easy part. Figuring out how humans work, and what to do about it, is the hard part. And even when we get it right, we must remember that neither technology nor regulation is a panacea. In the 21st century, cybercrime is increasingly just crime — and there is no way to end that most human of glitches.
This article originally appeared in The New York Times.