Part 1 of 4 – Introduction: ICANN’s Alternative Path to Delegation

As widely discussed recently, observed within the ICANN community several years ago, and anticipated in the broader technical community even earlier, the introduction of a new generic top-level domain (gTLD) at the global DNS root could result in name collisions with previously installed systems. Such systems sometimes send queries to the global DNS with domain name suffixes that, under reasonable assumptions at the time the systems were designed, may not have been expected to be delegated as gTLDs. The introduction of a new gTLD may conflict with those assumptions, such that the newly delegated gTLD collides with a domain name suffix in use within an internal name space, or one that is appended to a domain name as a result of search-list processing.
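
To make the failure mode concrete, here is a minimal sketch of how stub-resolver search-list processing can turn an internal short name into queries that reach the global root. The short name “intranet”, the search suffix “corp”, and the resolver behavior modeled here are illustrative assumptions, not details from the post.

```python
# A toy model of search-list processing, not a real resolver.
# The suffix "corp" and the short name "intranet" are hypothetical.

def search_list_candidates(name, search_list, ndots=1):
    """Mimic common stub-resolver behavior: names with fewer than
    'ndots' dots are tried with each search suffix appended before
    being tried as typed."""
    candidates = []
    if name.count(".") < ndots:
        candidates.extend(f"{name}.{suffix}" for suffix in search_list)
    candidates.append(name)  # finally, the name as typed
    return candidates

print(search_list_candidates("intranet", ["corp"]))
# ['intranet.corp', 'intranet']
# While ".corp" is not delegated, the first query fails at the root with
# a Name Error and resolution falls through as the system's designers
# assumed. If ".corp" is later delegated as a gTLD, that same query may
# instead resolve to a name under the new gTLD -- a name collision.
```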


Part 3 of 4 – Name Collision Mitigation Requires Qualitative Analysis

As discussed in the several studies on name collisions published to date, determining which queries are at risk, and thus how to mitigate the risk, requires qualitative analysis (New gTLD Security and Stability Considerations; New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis; Name Collisions in the DNS). Blocking a second-level domain (SLD) simply on the basis that it was queried for in a past sample set runs a significant risk of false positives. SLDs that could have been delegated safely may be excluded on quantitative evidence alone, limiting the value of the new gTLD until the status of the SLD can be proven otherwise.

Similarly, not blocking an SLD on the basis that it was not queried for in a past sample set runs a comparable risk of false negatives.
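
A small, purely illustrative sketch of the two failure modes described above. The SLD sets are made up for the example; in practice the “ground truth” set is exactly what quantitative blocking cannot see.

```python
# Hypothetical data: why blocking on past observation alone misclassifies.

observed_in_sample = {"mail", "wpad", "shop"}      # SLDs seen in the sample window
actually_at_risk   = {"mail", "wpad", "intranet"}  # ground truth (unknown in practice)

block_list = observed_in_sample                    # the purely quantitative rule

false_positives = block_list - actually_at_risk    # safe SLDs blocked anyway
false_negatives = actually_at_risk - block_list    # risky SLDs left unblocked

print(false_positives)  # {'shop'}     -- a deliverable SLD withheld from use
print(false_negatives)  # {'intranet'} -- an at-risk SLD that was never sampled
```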

A better way to deal with the risk is to treat not the symptoms but the underlying problem: that queries are being made by installed systems (or internal certificates are being employed by them) under the assumption that certain gTLDs won’t be delegated.


How Financial Institutions Can Up Their Game Against DDoS Attacks

With the ease of access to the internet and prevalence of social media today, unsuspecting computer users are making it easier than ever for malicious actors to target them with malcode. This trend has helped provide the perfect environment for Distributed Denial of Service (DDoS) attacks to grow in size, complexity and range of targets. Today’s attacks are not limited to web infrastructure; attackers are increasingly targeting the Domain Name System (DNS) infrastructure as well. This trend has been particularly noticeable in the financial industry, which has been hit hard over the last year.



Part 2 of 4 – DITL Data Isn’t Statistically Valid for This Purpose

For several years, DNS-OARC has been collecting DNS query data “from busy and interesting DNS name servers” as part of an annual “Day-in-the-Life” (DITL) effort (an effort originated by CAIDA in 2002) that I discussed in the first blog post in this series. DNS-OARC currently offers eight such data sets, covering the queries to many, but not all, of the 13 DNS root servers (along with some non-root data) over a two-day or longer period each year from 2006 to the present. With tens of billions of queries, the data sets provide researchers with a broad base of information about how the world is interacting with the global DNS, as seen from the perspective of root and other name server operators.

In order for second-level domain (SLD) blocking to mitigate the risk of name collisions for a given gTLD, the SLDs associated with at-risk queries must occur with sufficient frequency and geographical distribution to be captured in the DITL data sets with high probability. Because it is a purely quantitative countermeasure, based only on the occurrence of a query and not the context around it, SLD blocking offers no model for distinguishing at-risk queries from queries that are not at risk. Consequently, SLD blocking must make a stronger assumption to be effective: that any queries involving a given SLD occur with sufficient frequency and geographical distribution to be captured with high probability.

Put another way, the DITL data set – limited in time to an annual two-day period and in space to the name servers that participate in the DITL study – offers only a sample of the queries from installed systems, not statistically significant evidence of their behavior or of which at-risk queries are actually occurring.
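
As a rough illustration of the sampling concern, consider the probability that a low-frequency query source is captured at all by a single two-day annual window. The query rates below are assumptions made for the sake of the arithmetic, not DITL measurements.

```python
# Back-of-envelope capture probabilities for an annual two-day window,
# modeling a query source as a uniform (Poisson) stream. The rates are
# illustrative assumptions, not figures from the DITL data sets.

import math

WINDOW_DAYS = 2
YEAR_DAYS = 365

def capture_probability(queries_per_year):
    """Probability that at least one query lands inside the window."""
    expected_in_window = queries_per_year * WINDOW_DAYS / YEAR_DAYS
    return 1 - math.exp(-expected_in_window)

for rate in (1, 12, 52, 365):  # yearly, monthly, weekly, daily queriers
    print(f"{rate:4d} queries/year -> seen with probability {capture_probability(rate):.1%}")
# Roughly 0.5% (yearly), 6% (monthly), 25% (weekly), 87% (daily):
# even fairly regular query streams can be missed entirely by the sample.
```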


Tips to Protect E-Commerce Website Availability and Security During the Holidays

With the holiday shopping season quickly approaching, internet retailers are gearing up for an onslaught of web traffic – which is great, as long as they have the right measures in place to keep their customers safe and satisfied.

Even one hour of downtime due to a website outage or a malicious attack can have a significant impact on a retailer’s reputation and revenue, especially during the holidays – a period that the National Retail Federation says can account for up to 40 percent of an online retailer’s annual revenue. With some large e-commerce sites earning millions of dollars each day during the holiday season, even a few minutes of downtime can lead to financial losses in the tens of thousands of dollars, not to mention customer frustration.
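
For a sense of scale, a back-of-envelope calculation bears out that claim. The $5 million-per-day revenue figure below is an assumption chosen for the arithmetic, not a cited statistic.

```python
# Back-of-envelope downtime cost, using an assumed $5M/day peak-season
# revenue for a large e-commerce site (not a cited figure).

daily_revenue = 5_000_000
per_minute = daily_revenue / (24 * 60)   # roughly $3,472 per minute

for minutes in (5, 15, 60):
    print(f"{minutes:2d}-minute outage ~ ${minutes * per_minute:,.0f} in lost sales")
#  5-minute outage ~ $17,361
# 15-minute outage ~ $52,083
# 60-minute outage ~ $208,333
```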


Part 5 of 5: New gTLD SSR-2: Exploratory Consumer Impact Analysis

Throughout this series of blog posts we’ve discussed a number of issues related to the security, stability and resilience of the DNS ecosystem, particularly as we approach the rollout of new gTLDs. We also highlighted a number of issues that we believe remain outstanding and need to be resolved before new gTLDs can be introduced safely, and we tried to provide some context as to why, while continually noting that nearly all of these unresolved recommendations came from parties in addition to Verisign over the last several years. We received a good bit of flak from a small number of folks asking why we’re making such a stink about this, and we’ve attempted to meter our tone while increasing our volume on these matters. Of course, we’re not alone in this concern, as a growing list of others have illustrated; for example, the conclusion of SSAC’s SAC059, published just over 90 days ago, states in part:

The SSAC believes that the community would benefit from further inquiry into lingering issues related to expansion of the root zone as a consequence of the new gTLD program. Specifically, the SSAC recommends those issues that previous public comment periods have suggested were inadequately explored as well as issues related to cross-functional interactions of the changes brought about by root zone growth should be examined. The SSAC believes the use of experts with experience outside of the fields on which the previous studies relied would provide useful additional perspective regarding stubbornly unresolved concerns about the longer-term management of the expanded root zone and related systems.


Part 4 of 5: NXDOMAINs, SSAC’s SAC045, and New gTLDs

In 2010, ICANN’s Security and Stability Advisory Committee (SSAC) published SAC045, a report calling attention to particular problems that may arise should a new gTLD applicant use a string that has been seen with measurable (and meaningful) frequency in queries for resolution by the root system. The queries to which the report referred were for invalid top-level domains (TLDs) – that is, non-delegated strings – at the root level of the domain name system (DNS); such queries elicit responses commonly referred to as Name Error, or NXDOMAIN, responses from root name servers.
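
As a quick illustration of that Name Error behavior, the sketch below queries for a TLD’s NS records and treats NXDOMAIN as “not delegated.” It assumes the third-party dnspython package and a working recursive resolver; the non-delegated label is hypothetical, not one of the strings discussed in SAC045.

```python
# Requires the third-party dnspython package (pip install dnspython).
# "not-a-delegated-tld" is a hypothetical non-delegated string.

import dns.resolver

def tld_is_delegated(label: str) -> bool:
    """Return True if the label resolves as a delegated TLD,
    False if the lookup ends in a Name Error (NXDOMAIN)."""
    try:
        dns.resolver.resolve(label + ".", "NS")
        return True
    except dns.resolver.NXDOMAIN:
        return False  # the root reports the string as non-delegated

print(tld_is_delegated("com"))                   # True
print(tld_is_delegated("not-a-delegated-tld"))   # False
```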


Part 3 of 5: Name Collisions, Why Every Enterprise Should Care

Do you recall when you were a kid and you experienced for the first time an unnatural event where some other kid “stole” your name and their parents were now calling their child by your name, causing much confusion for all on the playground? And how this all made things even more complicated – or at least unnecessarily complex – when you and that kid shared a classroom and teacher, or street, or coach and team, and just perhaps that kid even had the same surname as you, amplifying the issue! What you were experiencing was a naming collision (in meatspace).


Part 2 of 5: Internet Infrastructure: Stability at the Core, Innovation at the Edge

For nearly all communications on today’s internet, domain names play a crucial role in providing stable navigation anchors for accessing information in a predictable and safe manner, irrespective of where you’re located or the type of device or network connection you’re using. The underpinnings of this access are made possible by the Domain Name System (DNS), a behind-the-scenes system that maps human-readable mnemonic names (e.g., www.Verisign.com) to machine-usable internet addresses (e.g., 69.58.187.40). The DNS is on the cusp of expanding profoundly in places where it’s otherwise been stable for decades, and, absent some explicit action, it may do so in a very dangerous manner.
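
As a minimal illustration of that name-to-address mapping, using only the Python standard library (the address returned reflects current DNS data and may differ from the example address above):

```python
# Resolve a host name to an IPv4 address via the system's stub resolver.
import socket

print(socket.gethostbyname("www.verisign.com"))  # e.g., 69.58.187.40 in the post's example
```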
