Minimum Disclosure: What Information Does a Name Server Need to Do Its Job?

Two principles in computer security that help bound the impact of a security compromise are the principle of least privilege and the principle of minimum disclosure or need-to-know.

As described by Jerome Saltzer in a July 1974 Communications of the ACM article, “Protection and the Control of Information Sharing in Multics,” the principle of least privilege states, “Every program and every privileged user should operate using the least amount of privilege necessary to complete the job.”

Need-to-know is the counterpart for sharing information: a system component should be given just enough information to perform its role, and no more. The US Department of Health and Human Services adopts this principle in its HIPAA privacy guidance, for example, which states: “protected health information should not be used or disclosed when it is not necessary to satisfy a particular purpose or carry out a function.”

There may be tradeoffs, of course, between minimizing the amount of privilege or information given to a component in a system and other objectives such as performance or simplicity. For instance, a component may be able to do its job more efficiently if given more than the minimum amount. And it may be easier simply to share more than is needed than to extract just the minimum required. The minimum amounts of privilege may also be hard to determine exactly, and they may change over time as the system evolves or is used in new ways.

Least privilege is well established in the Domain Name System (DNS) through delegation, in which one name server grants another just the authority it needs to handle requests within a specific subdomain. The principle of minimum disclosure has recently come to the forefront in the form of a technique called qname minimization, which aims to improve privacy in the DNS.
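To make the idea concrete, here is a minimal Python sketch (not a real resolver) of which query names a qname-minimizing resolver would reveal at each delegation level. A traditional resolver sends the full name to every server along the delegation chain; a minimizing resolver asks each level only about the labels that level is responsible for:

```python
# Sketch: the sequence of query names a qname-minimizing resolver
# would send while resolving "www.example.com" -- one additional
# label per delegation level, instead of the full name everywhere.

def minimized_query_names(qname):
    """For "www.example.com", ask the root only about "com.",
    the .com servers only about "example.com.", and so on."""
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[i:]) + "."
            for i in range(len(labels) - 1, -1, -1)]

print(minimized_query_names("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
# The root never learns "www.example" -- minimum disclosure at work.
```

In contrast, without minimization every server in the chain, including the root, would see the full query name `www.example.com.`.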

Help Ensure the Availability and Security of Your Enterprise DNS with Verisign Recursive DNS

At Verisign, we’ve made the Domain Name System (DNS) our business for more than 17 years. We support the availability of critical Internet infrastructure like .com and .net top-level domains (TLDs) and the A and J Internet Root Servers, and we provide critical Managed DNS services that ensure the availability of externally facing websites to customers around the world.

As we continue to expand our role in Internet security, we are excited to announce the next step in protecting the stability of enterprise DNS ecosystems: Verisign Recursive DNS. This new cloud-based recursive DNS service leverages Verisign’s global, securely managed DNS infrastructure to offer the performance, reliability and security that enterprises demand when securing their internal networks and ensuring that communications reach their intended destinations safely and securely.

The Why and How of DNS Data Analysis

A network traffic analyzer can tell you what’s happening in your network, while a Domain Name System (DNS) analyzer can provide context on the “why” and “how.”

This was the theme of the recent Verisign Labs Distinguished Speaker Series discussion led by Paul Vixie and Robert Edmonds, titled Passive DNS Collection and Analysis – The “dnstap” Approach.

Where Do Old Protocols Go To Die?

In Ridley Scott’s classic 1982 science fiction film Blade Runner, replicant Roy Batty (portrayed by Rutger Hauer) delivers this soliloquy:

“I’ve…seen things you people wouldn’t believe…Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those…moments…will be lost in time, like (cough) tears…in…rain. Time…to die.”

The WHOIS protocol was first published as RFC 812 in March 1982 – almost 33 years ago. It was designed for use in a simpler time when the community of Internet users was much smaller. WHOIS eventually became the default registration data directory for the Domain Name System (DNS). As interest in domain names and the DNS has grown over time, attempts have been made to add new features to WHOIS. None of these attempts have been successful, and to this day we struggle with trying to make WHOIS do things it was never designed to do.
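The protocol’s simplicity reflects that simpler time: a WHOIS exchange is just a query string over TCP port 43, terminated by CRLF, with the server’s free-form text response ending when it closes the connection. A minimal Python sketch (the server name in the comment is illustrative):

```python
# Minimal sketch of the WHOIS protocol: connect to TCP port 43,
# send the query followed by CRLF, read until the server closes
# the connection. No structure, no authentication, no encoding
# negotiation -- which is exactly why WHOIS is hard to extend.

import socket

def whois_request(query):
    """A WHOIS request is simply the query string plus CRLF."""
    return query.encode("ascii") + b"\r\n"

def whois(query, server, port=43, timeout=10):
    with socket.create_connection((server, port), timeout=timeout) as s:
        s.sendall(whois_request(query))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:        # server closes the socket when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# e.g. whois("example.com", "whois.example-registry.net")
```

Everything beyond this — structured fields, internationalization, access control — has to be layered on by convention, which is the root of the struggles described above.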

Exploring Future Internet Architectures

UCLA and Washington University in St. Louis recently announced the launch of the Named Data Networking (NDN) Consortium, a new forum for collaboration among university and industry researchers, including Verisign, on one candidate next-generation information-centric architecture for the internet.

Verisign Labs has been collaborating with UCLA Professor Lixia Zhang, one of the consortium’s co-leaders, on this future-directed design as part of our university research program for some time. The consortium launch is a natural next step in facilitating this research and its eventual application.

Van Jacobson, an Internet Hall of Fame member and the other co-leader of the NDN Consortium, surveyed developments in this area in his October 2012 talk in the Verisign Labs Distinguished Speaker Series titled, “The Future of the Internet? Content-Centric Networking.”

As I stated in my summary of the talk, content-centric networking and the related research areas under the heading of information-centric networking and NDN bring internet protocols up to date to match the way many of us are already using the internet. As Van noted, when people want to access content over the internet – for instance, the recording of his talk – they typically reference a URL, for instance http://www.youtube.com/watch?v=3zOLrQJ5kbU.

New from Verisign Labs: Measuring the Leakage of Onion at the Root

If you are trying to communicate anonymously on the internet using Tor, this paper may be an important read for you. Anonymity and privacy are at the core of what the Tor project promises its users. Short for The Onion Router, Tor provides individuals with a mechanism to communicate anonymously on the internet. As part of its offerings, Tor provides hidden services: anonymous networking between servers that are configured to receive inbound connections only through Tor. Requests to these hidden services are identified by names in the .onion namespace, a non-delegated (pseudo) top-level domain. Although the Tor system was designed to prevent .onion requests from leaking into the global DNS resolution process, numerous requests are still observed in the global DNS, raising concern about the severity of the leakage and the exposure of sensitive private data.
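One practical defense, sketched below in Python under the simplified assumption that an application can inspect names before handing them to the system resolver: check for the .onion suffix up front and route such lookups into Tor (e.g. via its SOCKS proxy) so they never reach the global DNS. The names and the two-way routing decision are illustrative, not Tor’s actual implementation:

```python
# Sketch: avoid leaking hidden-service lookups by testing for the
# .onion pseudo-TLD before a name ever reaches the stub resolver.

def is_onion(hostname):
    """True if the name falls under the .onion pseudo-TLD."""
    labels = hostname.lower().rstrip(".").split(".")
    return labels[-1] == "onion"

def resolve_or_route(hostname):
    """Send .onion names to Tor and never to the stub resolver;
    everything else may use ordinary DNS. (Illustrative policy.)"""
    return "tor" if is_onion(hostname) else "dns"

print(resolve_or_route("example.onion"))  # tor -- never leaks to DNS
print(resolve_or_route("example.com"))    # dns
```

The leakage the paper measures arises precisely when software skips a check like this and forwards .onion names to ordinary resolvers, which eventually query the root.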

Solving Challenges of Scale in Data and Language

It would not be too much of an exaggeration to say that the early internet operated on the scale of kilobytes, with all spoken languages represented using a single character encoding – ASCII. Today’s global internet, so fundamental to society and the world’s economy, now enables access to orders of magnitude more information, connecting speakers of a full spectrum of languages.

The research challenges continue to scale along with data volumes and user diversity.

IANA 2.0: Ensuring ICANN Accountability and Transparency for the Future

The National Telecommunications and Information Administration’s (NTIA) March 14, 2014, announcement proposing the transition of its legacy Internet Assigned Numbers Authority (IANA) stewardship role has presented the Internet Corporation for Assigned Names and Numbers (ICANN) multi-stakeholder community with equal measures of opportunity and responsibility. We have been handed a singular opportunity to define the terms of any stewardship transition and the fundamental responsibility to get it right.

Getting it right means ensuring, through a bottom-up, multi-stakeholder process, the reform of ICANN’s accountability structures to protect the community and the multi-stakeholder model prior to NTIA’s disengagement from its oversight and stewardship role. It also means acting quickly and efficiently so our window of opportunity is not missed.

Introducing getdns: a Modern, Extensible, Open Source API for the DNS

Verisign is pleased to announce the public introduction of getdns at The Next Web in Amsterdam (TNW Europe), April 23-24, 2014. Verisign Labs and NLnet Labs have collaborated to develop getdns, an open source implementation of the getdns API (application programming interface) specification.

At The Next Web, getdns is one of the challenge APIs in a 36-hour Hack Battle. Multiple teams of application coding experts are using getdns to develop innovative applications that leverage the global security infrastructure available through DNS Security Extensions (DNSSEC).

DNS Outages: The Challenges of Operating Critical Infrastructure

Recent attacks targeting enterprise websites have created greater awareness of how critical the DNS is to the reliability of internet services, and of the potentially catastrophic impact of a DNS outage. The DNS, made up of a complex hierarchy of root and lower-level name servers, translates user-friendly domain names to numerical IP addresses. With few exceptions, DNS lives in a grey area between IT and network operations. With the increasing occurrence of distributed denial of service (DDoS) attacks, advanced persistent threats (APTs) and exploitation of user errors through techniques such as typosquatting and phishing, enterprises can no longer take a passive role in managing their DNS internet infrastructure.
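To illustrate the typosquatting threat mentioned above: one simple heuristic for flagging candidate typosquats of a brand domain is edit (Levenshtein) distance, shown in the Python sketch below. Real monitoring services use far richer signals (homoglyphs, keyboard adjacency, registration data); the domains here are illustrative:

```python
# Sketch: flag registered names within a small edit distance of a
# brand domain as candidate typosquats. Heuristic only -- real
# detection combines many more signals.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate, brand, max_distance=2):
    """Close to the brand name, but not the brand name itself."""
    return 0 < edit_distance(candidate, brand) <= max_distance

print(looks_like_typosquat("examp1e.com", "example.com"))    # True
print(looks_like_typosquat("unrelated.org", "example.com"))  # False
```

Even this crude filter shows why a passive stance is untenable: names one keystroke away from a brand can be registered by anyone and used for phishing.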
