In Network Security Design, It’s About the Users

One of the longstanding goals of network security design is to be able to prove that a system – any system – is secure.

Designers would like to be able to show that a system, properly implemented and operated, meets its objectives for confidentiality, integrity, availability and other attributes against the variety of threats the system may encounter.

A half century into the computing revolution, this goal remains elusive.

One reason for the shortcoming is theoretical: Computer scientists have made limited progress in proving lower bounds for the difficulty of solving the specific mathematical problems underlying most of today’s cryptography. Although those problems are widely believed to be hard, there’s no assurance that they must be so – and indeed it turns out that some of them may be quite easy to solve given the availability of a full-scale quantum computer.

Another reason is quite practical: Even given building blocks that offer a high level of security, designers and implementers may well put them together in unexpected ways that ultimately undermine the very goals those building blocks were supposed to achieve.

BUILDING AN INSECURE SYSTEM OUT OF PERFECTLY GOOD CRYPTOGRAPHY

Dr. Radia Perlman, a networking and security pioneer, Internet Hall of Fame inductee and EMC Fellow, recently shared her perspectives on the challenges of practical security in a lecture for Verisign Labs’ Distinguished Speaker Series. Speaking on the topic, “How to Build an Insecure System out of Perfectly Good Cryptography,” Radia began with a simple example based on the famous one-time pad, one of the few known unconditionally secure cryptosystems. In her talk, she showed how two users could each individually encrypt a message securely with a one-time pad—and yet still reveal enough information through the ciphertexts they exchange for an adversary to uncover the message. An insecure system has thus been built out of perfectly good cryptography.
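The talk’s specific construction isn’t reproduced here, but a minimal C sketch of one classic way this failure arises, reusing the same pad for two messages, shows the idea; the messages and pad below are invented for illustration. Each ciphertext, taken alone, is perfectly secret, yet XORing the two ciphertexts cancels the pad and hands the adversary the XOR of the plaintexts, which frequency analysis or crib dragging can then unravel.

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch (invented messages and pad): a one-time pad is
 * unconditionally secure only if the pad is used once.  If two messages
 * are encrypted under the same pad, each ciphertext alone reveals
 * nothing, but their XOR cancels the pad and exposes m1 XOR m2.
 */
int main(void) {
    const unsigned char pad[] = "this pad must never be reused!!";
    const char *m1 = "attack the hill at dawn";
    const char *m2 = "retreat to the old fort";
    size_t n = strlen(m1);                       /* both messages are 23 bytes */

    unsigned char c1[64], c2[64];
    for (size_t i = 0; i < n; i++) {
        c1[i] = (unsigned char)m1[i] ^ pad[i];   /* first encryption */
        c2[i] = (unsigned char)m2[i] ^ pad[i];   /* pad reused: the bug */
    }

    /* The adversary sees only c1 and c2, never the pad. */
    printf("c1 XOR c2 (= m1 XOR m2): ");
    for (size_t i = 0; i < n; i++)
        printf("%02x", (unsigned)(c1[i] ^ c2[i]));
    printf("\n");
    return 0;
}
```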

Radia’s research career began with a similarly healthy dose of skepticism, in her case toward a supposed proof of the stability of the ARPANET, the predecessor to today’s Internet. Another researcher had published a proof that the ARPANET routing protocols were correct and that the system could not become unstable. Radia offered a counterexample showing that if three particular routing messages were sent, the network would become permanently unstable. The researcher’s response: “If you put in bad data, what do you expect?”

Security, Radia observed, is not about what happens in ideal situations, but about what happens in the reality of errors and threats.

(I recall a software implementation I made years ago: as long as all input lengths stayed within their accepted ranges, I was confident the encryption algorithm performed correctly. It did not take long for someone to discover a buffer overflow attack. If only I had thought more at the time about the kinds of issues Radia raised!)
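As a hypothetical illustration of that kind of bug (not the actual code in question), consider a routine that copies input into a fixed-size buffer: it behaves correctly whenever the input is within the accepted range, and fails catastrophically the moment it isn’t. The buffer size and function names below are invented.

```c
#include <stdio.h>
#include <string.h>

#define MAX_INPUT 16   /* the "accepted range" the code silently assumed */

/* Correct only for inputs within the expected range: the copy into a
 * fixed-size stack buffer has no length check, so an oversized input
 * overflows buf and corrupts the stack. */
void process_message_unsafe(const char *input) {
    char buf[MAX_INPUT];
    strcpy(buf, input);               /* overflows when input is too long */
    printf("processing: %s\n", buf);
}

/* The fix: reject out-of-range input explicitly before copying. */
int process_message_safe(const char *input) {
    char buf[MAX_INPUT];
    if (strlen(input) >= sizeof(buf))
        return -1;                    /* out of range, refuse to process */
    strcpy(buf, input);
    printf("processing: %s\n", buf);
    return 0;
}

int main(void) {
    process_message_safe("ok input");                          /* accepted */
    if (process_message_safe("far longer than sixteen bytes") != 0)
        printf("rejected oversized input\n");
    /* Calling process_message_unsafe() with that string would instead
     * smash the stack, the kind of failure described above. */
    return 0;
}
```

The one-line length check is exactly the kind of non-ideal-situation thinking Radia was urging.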

Radia’s series of vignettes continued with comments on standards development as a succession of unacknowledged idea exchanges among competing moving targets, and with additional examples of practical security challenges from the history of ITU-T X.509 certificates, Privacy-Enhanced Mail and credential management systems. Her remarks on certificate management echoed points Verisign Labs has made about the benefits of publishing certificates as DNS records, and she went a step further in recommending that trust anchors start at the user’s organization, to reduce the risk of compromise at other points in the system.

Moving to a discussion of user interaction, she pointed out the challenges in “secure” screen savers that sometimes require just a single key to be typed, other times a password, and still other times both a username and a password. Individually, all three are effective. But if you’re giving a presentation from your laptop, you’ve paused long enough for the screen saver to enter the third mode, and you think it’s in the second, you might find yourself typing your password into the username field, a live demonstration of another way perfectly good cryptography can be turned insecure.

THE SPECTRUM OF USABILITY AND SECURITY TRADEOFFS

With other memorable examples involving password rules and security questions, it is easy to understand Radia’s conclusion that, on the spectrum of security/usability tradeoffs, designers have not only failed to reach any point on the optimal balance between the two (the diagonal line in the figure below), they have often achieved hardly any of either dimension.

Figure: The spectrum of usability and security tradeoffs

The classic volume Network Security, which the speaker co-authored, concludes with the observation: “[humans] are sufficiently pervasive that we must design our protocols around their limitations.” Networks and applications are built by humans, used by humans, and attacked by humans. If we want a system to be secure, following Radia’s wise advice, we need to design it for humans – and protect it against humans as well. That advice will prove to be much more impactful than any mathematical assurance could be.


Burt Kaliski

Dr. Burt Kaliski Jr., Senior Vice President and Chief Technology Officer, leads Verisign’s long-term research program. Through the program’s innovation initiatives, the CTO organization, in collaboration with business and technology leaders across the company, explores emerging technologies, assesses their impact on the company’s business, prototypes and evaluates new concepts, and recommends new strategies and solutions. Burt is also responsible for...