The Why and How of DNS Data Analysis

A network traffic analyzer can tell you what’s happening in your network, while a Domain Name System (DNS) analyzer can provide context on the “why” and “how.”

This was the theme of the recent Verisign Labs Distinguished Speaker Series discussion led by Paul Vixie and Robert Edmonds, titled Passive DNS Collection and Analysis – The “dnstap” Approach.

Vixie, a long-time Internet and DNS innovator, CEO of Farsight Security, and a recent inductee into the Internet Hall of Fame, described recent innovations in information sharing among DNS resolvers that can help network operators detect and remediate security threats. As a result of Farsight’s efforts, DNS measurements are now being collected at a rate of 150 Mbit/s of compressed data and made available to the Internet security community for analysis.

The dnstap approach builds on initial work on “passive DNS” data collection by Florian Weimer, where responses received from authoritative name servers by DNS resolvers are collected to understand DNS behavior and configurations. Rather than collecting network packets, dnstap is “generated from within DNS implementations” via a new protocol. The data collection operates asynchronously, meaning that regular DNS operations within resolvers continue independently of measurements being taken, thus minimizing the impact on performance.
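The asynchronous decoupling described above can be illustrated with a short sketch (hypothetical code, not from dnstap itself): the resolver’s hot path hands each measurement record to a bounded queue without ever blocking, so a slow consumer drops records rather than stalling resolution.

```python
import queue
import threading

# Illustrative sketch of asynchronous measurement export in the spirit
# of dnstap: the resolver never blocks on the logging channel, so
# regular DNS operations continue independently of collection.

log_queue = queue.Queue(maxsize=1000)  # bounded buffer for records
collected = []

def record_message(message: bytes) -> bool:
    """Called from the resolver's hot path; returns False on drop."""
    try:
        log_queue.put_nowait(message)  # drop-on-full, no back-pressure
        return True
    except queue.Full:
        return False  # record dropped; resolution is unaffected

def exporter():
    # Runs in a separate thread, draining records at its own pace.
    while True:
        msg = log_queue.get()
        if msg is None:  # sentinel to stop
            break
        collected.append(msg)

t = threading.Thread(target=exporter)
t.start()
record_message(b"example.com. IN A response")
log_queue.put(None)
t.join()
```

The key design choice is drop-on-full rather than back-pressure: losing an occasional measurement is acceptable, while delaying a DNS response is not.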

According to Vixie, summary information made available via dnstap can help security analysts better understand the “total perimeter” of attackers by correlating relationships among different domain names, IP addresses, and other resources employed by attackers.
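A toy example (with made-up names and addresses) shows the kind of correlation this enables: grouping domain names by the IP addresses they have resolved to, so that names sharing the same attacker infrastructure surface together.

```python
from collections import defaultdict

# Hypothetical passive DNS observations: (domain, resolved IP) pairs.
observations = [
    ("bad-login.example", "203.0.113.7"),
    ("phish-mail.example", "203.0.113.7"),
    ("unrelated.example", "198.51.100.9"),
]

# Group domains by the addresses they share.
domains_by_ip = defaultdict(set)
for domain, ip in observations:
    domains_by_ip[ip].add(domain)

# Domains that co-resolve to one address are correlation candidates,
# hinting at a shared piece of the attacker's "total perimeter."
clusters = {ip: names for ip, names in domains_by_ip.items()
            if len(names) > 1}
```

Real analysis would correlate many more resource types (name servers, registration data, and so on), but the principle is the same.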

Vixie was careful to note that the information shared is from the “upward” or “authoritative” side of the resolvers, rather than the “downward,” or “client” side, reducing privacy concerns.

Edmonds, the software developer at Farsight Security responsible for maintaining several core components of Farsight’s Security Information Exchange (SIE) and DNSDB products, provided the detail behind Vixie’s overview, describing the architecture of dnstap and the Frame Streams data transport protocol (fstrm). His work bridges the legacy programming style of the DNS protocol with the more modern style of RESTful APIs and JSON objects, making it easier to integrate DNS measurement capabilities into new applications.
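The core idea of Frame Streams can be sketched as simple length-prefixed framing; this is a minimal illustration only, as the real fstrm protocol also defines control frames and a content-type handshake, omitted here.

```python
import struct

# Minimal sketch of length-prefixed data framing in the spirit of the
# Frame Streams (fstrm) transport: each data frame is a 32-bit
# big-endian length followed by an opaque payload.

def encode_frame(payload: bytes) -> bytes:
    return struct.pack("!I", len(payload)) + payload

def decode_frames(stream: bytes) -> list:
    frames, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames

wire = encode_frame(b"dnstap message 1") + encode_frame(b"dnstap message 2")
assert decode_frames(wire) == [b"dnstap message 1", b"dnstap message 2"]
```

Because the payload is opaque to the transport, the same framing can carry DNS messages today and other record types tomorrow.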

“In addition to DNS requests and responses exchanged between resolvers and authoritative name servers, future extensions of these capabilities could cover other message types such as zone transfers and cache purge notifications,” Edmonds said.

Edmonds showed graphs of the modest performance impact of the new measurement capabilities compared to previous approaches and concluded with a real-time demo of one of the data feeds provided by Farsight’s information exchange, a stream of newly observed domain names. These are fully qualified domain names seen for the first time in Farsight’s database, which goes back to 2010.
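The newly-observed-domains feed reduces, at its core, to first-sighting detection. A simplified sketch (the real feed checks against Farsight’s historical database rather than an in-memory set):

```python
# Sketch of a "newly observed domains" feed: emit a fully qualified
# domain name only the first time it appears in the observation stream.

seen = set()

def observe(fqdn: str):
    """Return the normalized name if new, else None."""
    name = fqdn.rstrip(".").lower()  # normalize trailing dot and case
    if name in seen:
        return None
    seen.add(name)
    return name

stream = ["example.com.", "EXAMPLE.COM", "new-domain.example."]
new_names = [n for n in (observe(f) for f in stream) if n is not None]
# new_names holds each distinct name once, in first-seen order
```

Normalization matters: without case-folding and trailing-dot handling, the same domain would be reported as “new” multiple times.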

Farsight is currently in the process of integrating measurement capabilities into a range of open source DNS resolvers, and Vixie said that the company plans to work with commercial providers as well.

DNS data analysis has become increasingly important as more and more applications rely on DNS as a trusted infrastructure function, and as the DNS itself continues to evolve.

For example, as detailed in Verisign Labs’ 2013 publications New gTLD Security and Stability Considerations and New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis, DNS data from the root servers provides important insights into the potential risks to installed systems of name collisions resulting from the introduction of new generic top-level domains (gTLDs).

As with any good use of “big data,” the whole can be greater than the sum of the parts. It may be beneficial to combine DNS traffic collected from resolvers (still on the authoritative side, consistent with Vixie’s comments) with similar traffic collected at authoritative name servers to get a more complete picture of the “why” and “how.” This is especially important as new techniques such as QNAME minimization change the traffic flow between resolvers and authoritative name servers. The combined view may also help maintain a complete picture if traffic flow changes due to the proposed introduction of additional root servers.
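To see why QNAME minimization changes what each vantage point observes, consider the query names a minimizing resolver exposes along the delegation chain. A simple sketch (illustrative only; real resolvers also vary the query type and handle delegation boundaries more carefully):

```python
# Under QNAME minimization (RFC 7816), a resolver asks each
# authoritative server only for the next label toward the target,
# rather than sending the full query name to every server.

def minimized_queries(fqdn: str) -> list:
    labels = fqdn.rstrip(".").split(".")
    queries = []
    # Walk from the TLD down, adding one label per delegation step.
    for i in range(len(labels) - 1, -1, -1):
        queries.append(".".join(labels[i:]) + ".")
    return queries

# A traditional resolver would send "www.example.com." to the root,
# the .com servers, and the example.com servers alike; a minimizing
# resolver sends progressively longer names instead.
assert minimized_queries("www.example.com.") == [
    "com.",
    "example.com.",
    "www.example.com.",
]
```

The upshot for measurement: servers higher in the hierarchy see fewer full query names, so data collected only at one layer tells less of the story than before.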

In today’s highly interconnected computing environments, system-level analysis is essential. This applies both to attack modeling (a point made in a recent conference paper coauthored by Verisign’s Chief Security Officer Danny McPherson and Principal Research Scientist Eric Osterweil) and to information sharing. Verisign Labs is exploring new ways to improve attack defenses through information sharing that adapt to changes in the DNS and new application requirements. Measurement capabilities like dnstap are an important tool to have in the portfolio.

What tools do you find helpful for understanding “why” and “how” on your network?


Burt Kaliski

Senior Vice President and Chief Technology Officer. As Verisign’s chief technology officer, Burt is responsible for the company’s long-term technology vision. He is the leader of Verisign Labs, which focuses on applied research, university collaboration, industry thought leadership, and intellectual property strategy. He also facilitates the technical community within Verisign and works closely with Verisign’s executive leadership team to turn...
