We recently hosted Dr. Ralph Merkle as a guest speaker for the Verisign Labs Distinguished Speaker Series. His talk, “Quantum Computers and Public-Key Cryptosystems,” was a great presentation on how molecular nanotechnology — the ability to economically manufacture most arrangements of atoms permitted by physical law — could fundamentally alter the world as we know it. Ralph’s research on this topic, along with that of many others, has been groundbreaking, and we are grateful he took the time to come and share his knowledge.
Part 4 of 4 – Conclusion: SLD Blocking Is Too Risky without TLD Rollback
ICANN’s second-level domain (SLD) blocking proposal includes a provision that a party may demonstrate that an SLD not in the initial sample set could cause “severe harm,” and that SLD can potentially be blocked for a certain period of time. The extent to which that provision would need to be exercised remains to be determined. However, given the concerns outlined in Part 2 and Part 3 of this series, it seems likely that there could be many additions to (and deletions from!) the blocked list, given the lack of correlation between the DITL data and actual at-risk queries.
Part 1 of 4 – Introduction: ICANN’s Alternative Path to Delegation
As widely discussed recently, observed within the ICANN community several years ago, and anticipated in the broader technical community even earlier, the introduction of a new generic top-level domain (gTLD) at the global DNS root could result in name collisions with previously installed systems. Such systems sometimes send queries to the global DNS with domain name suffixes that, under reasonable assumptions at the time the systems were designed, may not have been expected to be delegated as gTLDs. The introduction of a new gTLD may conflict with those assumptions, such that the newly delegated gTLD collides with a domain name suffix in use within an internal name space, or one that is appended to a domain name as a result of search-list processing.
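To make the search-list mechanism concrete, here is a minimal, illustrative sketch (not any particular vendor’s resolver implementation) of how a stub resolver with a configured search list turns an unqualified internal name into fully qualified candidates, one of which can later collide with a newly delegated gTLD:

```python
# Illustrative sketch of DNS search-list processing.
# The search list ["corp", "example.com"] is a hypothetical example.

def expand_with_search_list(name: str, search_list: list[str]) -> list[str]:
    """Return the fully qualified names a stub resolver might try,
    in order, for an unqualified (dot-free) name."""
    if "." in name:
        return [name]  # already qualified; try as-is
    return [f"{name}.{suffix}" for suffix in search_list] + [name]

# An internal host 'mail' on a network with a corporate search list:
candidates = expand_with_search_list("mail", ["corp", "example.com"])
print(candidates)  # ['mail.corp', 'mail.example.com', 'mail']
```

Before “.corp” exists in the root zone, the query for “mail.corp” that leaks to the global DNS simply returns a Name Error; after delegation, the same query could resolve to a name controlled by an unrelated third party.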
Part 3 of 4 – Name Collision Mitigation Requires Qualitative Analysis
As discussed in the several studies on name collisions published to date, determining which queries are at risk, and thus how to mitigate the risk, requires qualitative analysis (New gTLD Security and Stability Considerations; New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis; Name Collisions in the DNS). Blocking a second-level domain (SLD) simply on the basis that it was queried for in a past sample set runs a significant risk of false positives. SLDs that could have been delegated safely may be excluded on quantitative evidence alone, limiting the value of the new gTLD until the SLD’s status can be proven otherwise.
Similarly, not blocking an SLD on the basis that it was not queried for in a past sample set runs a comparable risk of false negatives.
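A toy example makes both error types visible. The sets below are entirely hypothetical, but they show why a purely quantitative rule (“block whatever appeared in the sample”) misclassifies in both directions:

```python
# Toy illustration (hypothetical data) of sample-based SLD blocking.

at_risk = {"mail", "intranet", "vpn"}            # SLDs actually in internal use
observed = {"mail", "intranet", "www", "test"}   # SLDs seen in a past query sample

blocked = observed                               # quantitative rule: block what was seen

false_positives = blocked - at_risk              # safe SLDs needlessly blocked
false_negatives = at_risk - blocked              # risky SLDs left unblocked

print(sorted(false_positives))  # ['test', 'www']
print(sorted(false_negatives))  # ['vpn']
```

The false positives limit the value of the new gTLD; the false negative is exactly the “severe harm” case the proposal’s exception provision would later have to catch.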
A better way to deal with the risk is to treat not the symptoms but the underlying problem: that queries are being made by installed systems (or internal certificates are being employed by them) under the assumption that certain gTLDs won’t be delegated.
Part 2 of 4 – DITL Data Isn’t Statistically Valid for This Purpose
For several years, DNS-OARC has been collecting DNS query data “from busy and interesting DNS name servers” as part of an annual “Day-in-the-Life” (DITL) effort (an effort originated by CAIDA in 2002) that I discussed in the first blog post in this series. DNS-OARC currently offers eight such data sets, covering the queries to many but not all of the 13 DNS root servers (and some non-root data) over a two-day period or longer each year from 2006 to present. With tens of billions of queries, the data sets provide researchers with a broad base of information about how the world is interacting with the global DNS as seen from the perspective of root and other name server operators.
In order for second-level domain (SLD) blocking to mitigate the risk of name collisions for a given gTLD, it must be the case that the SLDs associated with at-risk queries occur with sufficient frequency and geographical distribution to be captured in the DITL data sets with high probability. Because it is a purely quantitative countermeasure, based only on the occurrence of a query, not the context around it, SLD blocking does not offer a model for distinguishing at-risk queries from queries that are not at risk. Consequently, SLD blocking must make a stronger assumption to be effective: that any queries involving a given SLD occur with sufficient frequency and geographical distribution to be captured with high probability.
Put another way, the DITL data set – limited in time to an annual two-day period and in space to the name servers that participate in the DITL study – offers only a sample of the queries from installed systems, not statistically significant evidence of their behavior and of which at-risk queries are actually occurring.
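A back-of-envelope calculation illustrates the sampling problem. Assuming, purely for illustration, that a given installed system’s queries arrive as a Poisson process (an assumption of ours, not a claim from the DITL studies), the probability that at least one of its queries lands inside a two-day observation window is 1 − e^(−rT), where r is the daily query rate and T is the window length:

```python
import math

def capture_probability(rate_per_day: float, window_days: float = 2.0) -> float:
    """P(at least one query lands in the observation window), assuming
    queries arrive as a Poisson process at the given daily rate."""
    return 1.0 - math.exp(-rate_per_day * window_days)

# A system that queries roughly once a day is very likely captured...
print(f"{capture_probability(1.0):.3f}")   # 0.865
# ...but one that queries roughly once a month usually is not.
print(f"{capture_probability(1/30):.3f}")  # 0.064
```

Even this simple model overstates capture, since it ignores that only queries reaching the participating name servers are recorded at all; infrequent but potentially high-impact queries can easily be absent from the sample.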
Diversity, Openness and vBSDcon 2013
“There never were in the world two opinions alike, no more than two hairs or two grains; the most universal quality is diversity”
–Michel Eyquem, seigneur de Montaigne (1533–1592)
Diversity is a central design principle of the Domain Name System. With respect to the DNS root, it’s the reason that there are 13 separately managed root servers with 12 independent operators. It’s the reason Verisign operates the two root servers we’re responsible for – the A and J roots – as well as other name servers, at multiple locations around the world. It’s also the reason that within these locations operated by Verisign, multiple physical servers handle the incoming traffic. And it’s the reason that among these multiple servers, we use multiple hardware and software platforms, as well as multiple network providers.
In other words, diversity is one reason the DNS industry in general, and Verisign in particular, doesn’t do everything the same way and in the same place.
New gTLD Queries at the Root & Heisenberg’s Uncertainty Principle
Since we published our second SSR report a few weeks back, recently updated with revision 1.1, we’ve been taking a deeper look at queries to the root servers that elicit “Name Error” (NXDomain) responses, and we figured we’d share some preliminary results.
Part 5 of 5; New gTLD SSR-2: Exploratory Consumer Impact Analysis
Throughout this series of blog posts we’ve discussed a number of issues related to security, stability and resilience of the DNS ecosystem, particularly as we approach the rollout of new gTLDs. Additionally, we highlighted a number of issues that we believe are outstanding and need to be resolved before the safe introduction of new gTLDs can occur – and we tried to provide some context as to why, all the while continuously highlighting that nearly all of these unresolved recommendations came from parties in addition to Verisign over the last several years. We received a good bit of flak from a small number of folks asking why we’re making such a stink about this, and we’ve attempted to meter our tone while increasing our volume on these matters. Of course, we’re not alone in this, as a growing number of others have illustrated; e.g., SSAC SAC059’s Conclusion, published just a little over 90 days ago, illustrates this in part:
The SSAC believes that the community would benefit from further inquiry into lingering issues related to expansion of the root zone as a consequence of the new gTLD program. Specifically, the SSAC recommends those issues that previous public comment periods have suggested were inadequately explored as well as issues related to cross-functional interactions of the changes brought about by root zone growth should be examined. The SSAC believes the use of experts with experience outside of the fields on which the previous studies relied would provide useful additional perspective regarding stubbornly unresolved concerns about the longer-term management of the expanded root zone and related systems.
Part 4 of 5; NXDOMAINS, SSAC’s SAC045, and new gTLDs
In 2010, ICANN’s Security and Stability Advisory Committee (SSAC) published SAC045, a report calling attention to particular problems that may arise should a new gTLD applicant use a string that has been seen with measurable (and meaningful) frequency in queries for resolution by the root system. The queries to which they referred involved invalid top-level domain (TLD) strings (i.e., non-delegated strings) at the root level of the domain name system (DNS) – queries that elicit responses commonly referred to as Name Error, or NXDomain, responses from root name servers.
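At the wire level, NXDomain is simply response code (RCODE) 3 in the DNS message header defined by RFC 1035. The sketch below (illustrative only; it builds and decodes packets in memory rather than querying a real root server) shows where that code lives:

```python
import struct

# Minimal sketch of the DNS wire-format pieces relevant to NXDomain
# (RFC 1035). RCODE is the low 4 bits of the header flags field;
# value 3 is "Name Error", i.e., NXDomain.

NXDOMAIN = 3

def build_query(name: str, txid: int = 0x1234) -> bytes:
    """Encode a standard query for an A record, recursion desired."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def rcode(response: bytes) -> int:
    """Extract the RCODE from a DNS response header."""
    flags = struct.unpack(">H", response[2:4])[0]
    return flags & 0x000F

# A root server answering for a non-delegated string would set RCODE 3;
# here we just decode a hand-built response header with that value.
fake_nxdomain_header = struct.pack(">HHHHHH", 0x1234, 0x8183, 1, 0, 0, 0)
print(rcode(fake_nxdomain_header) == NXDOMAIN)  # True
```

SAC045’s concern is precisely that installed systems may depend on receiving RCODE 3 for such strings; once the string is delegated, those same queries begin returning real answers.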
Part 3 of 5: Name Collisions, Why Every Enterprise Should Care
Do you recall when you were a kid and you experienced for the first time an unnatural event where some other kid “stole” your name, and their parents were now calling their child by your name, causing much confusion for all on the playground? And how this all made things even more complicated – or at least unnecessarily complex – when you and that kid shared a classroom and teacher, or street, or coach and team, and just perhaps that kid even had the same surname as you, amplifying the issue! What you were experiencing was a naming collision (in meatspace).