Verisign Blog (https://blog.verisign.com/): a global provider of critical internet infrastructure and domain name registry services

Domain Name Industry Brief Quarterly Report: DNIB.com Announces 359.8 Million Domain Name Registrations in the Fourth Quarter of 2023
https://blog.verisign.com/domain-names/q4-2023-domain-name-industry-brief-quarterly-report/ | Thu, 15 Feb 2024

Today, the latest issue of The Domain Name Industry Brief Quarterly Report was released by DNIB.com, showing the fourth quarter of 2023 closed with 359.8 million domain name registrations across all top-level domains (TLDs), an increase of 0.6 million domain name registrations, or 0.2%, compared to the third quarter of 2023. Domain name registrations also increased by 8.9 million, or 2.5%, year over year.

Check out the latest issue of The Domain Name Industry Brief Quarterly Report to see domain name stats from the fourth quarter of 2023, including:

  • Top 10 largest TLDs by number of reported domain names
  • Top 10 largest ccTLDs by number of reported domain names
  • New gTLDs (ngTLDs) as a percentage of total TLDs
  • Geographical ngTLDs as a percentage of total corresponding geographical TLDs

DNIB.com and The Domain Name Industry Brief Quarterly Report are sponsored by Verisign. To see past issues of the quarterly report and interactive dashboards, and to learn about DNIB.com’s statistical methodology, please visit DNIB.com.

Verisign Provides Open Source Implementation of Merkle Tree Ladder Mode
https://blog.verisign.com/security/open-source-merkle-tree-ladder-mode/ | Thu, 04 Jan 2024

The quantum computing era is coming, and it will change everything about how the world connects online. While quantum computing will yield tremendous benefits, it will also create new risks, so it’s essential that we prepare our critical internet infrastructure for what’s to come. That’s why we’re so pleased to share our latest efforts in this area, including technology that we’re making available as an open source implementation to help internet operators worldwide prepare.

In recent years, the research team here at Verisign has been focused on a future where quantum computing is a reality, and where the general best practices and guidelines of traditional cryptography are re-imagined. As part of that work, we’ve made three further contributions to help the DNS community prepare for these changes:

  • an open source implementation of our Internet-Draft (I-D) on Merkle Tree Ladder (MTL) mode;
  • a new I-D on using MTL mode signatures with DNS Security Extensions (DNSSEC); and
  • an expansion of our previously announced public license terms to include royalty-free terms for implementing and using MTL mode if the I-Ds are published as Experimental, Informational, or Standards Track Requests for Comments (RFCs). (See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.)

About MTL Mode

First, a brief refresher on what MTL mode is and what it accomplishes:

MTL mode is a technique developed by Verisign researchers that can reduce the operational impact of a signature scheme when authenticating an evolving series of messages. Rather than signing messages individually, MTL mode signs structures called Merkle tree ladders that are derived from the messages to be authenticated. Individual messages are authenticated relative to a ladder using a Merkle tree authentication path, while ladders are authenticated relative to a public key of an underlying signature scheme using a digital signature. The size and computational cost of the underlying digital signatures can therefore be spread across multiple messages.
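
To illustrate the amortization idea in the simplest possible terms, here is a minimal Python sketch. It is not Verisign’s implementation and does not follow the MTL I-D’s encoding; the hash choice, the message set, and the placeholder “rung signature” are assumptions made purely for illustration. One (potentially large) signature covers the tree root, while each message needs only a short path of sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(messages):
    """Return every level of a Merkle tree over the message hashes.
    Assumes a power-of-two number of messages, for brevity."""
    level = [h(m) for m in messages]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify(message, index, path, root):
    node = h(message)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# One (potentially large) signature covers the rung -- here simply the tree root.
messages = [f"record {i}".encode() for i in range(8)]
levels = build_tree(messages)
rung = levels[-1][0]
rung_signature = b"<placeholder for a post-quantum signature over the rung>"

# Each message is then authenticated with only a short path of sibling hashes.
path = auth_path(levels, 5)
assert verify(messages[5], 5, path, rung)
print(f"auth path: {len(path)} hashes, {32 * len(path)} bytes")
```

In MTL mode proper, the signed structure is a ladder of rungs that grows as new messages are added rather than a single fixed root, but the amortization principle sketched here is the same.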

The reduction in operational impact achieved by MTL mode can be particularly beneficial when the mode is applied to a signature scheme that has a large signature size or computational cost in specific use cases, such as when post-quantum signature schemes are applied to DNSSEC.

Recently, Verisign Fellow Duane Wessels described how Verisign’s DNSSEC algorithm update — from RSA/SHA-256 (Algorithm 8) to ECDSA Curve P-256 with SHA-256 (Algorithm 13) — increases the security strength of DNSSEC signatures and reduces their size impact. That update is a logical next step in the evolution of DNSSEC resiliency. In the future, it is possible that DNSSEC may utilize a post-quantum signature scheme. The new post-quantum signature schemes currently being standardized have a shortcoming, though: applying them directly to DNSSEC would significantly increase the size of the signatures.1 With our work on MTL mode, Verisign’s researchers have provided a way to achieve the security benefit of a post-quantum algorithm rollover while mitigating the size impact.

Put simply, this means that in a quantum environment, the MTL mode of operation developed by Verisign will enable internet infrastructure operators to use the longer signatures they will need to protect communications from quantum attacks, while still supporting the speed and space efficiency we’ve come to expect.

For more background information on MTL mode and how it works, see my July 2023 blog post, the MTL mode I-D, or the research paper, “Merkle Tree Ladder Mode: Reducing the Size Impact of NIST PQC Signature Algorithms in Practice.”

Recent Standardization Efforts

In my July 2023 blog post titled “Next Steps in Preparing for Post-Quantum DNSSEC,” I described two recent contributions by Verisign to help the DNS community prepare for a post-quantum world: the MTL mode I-D and a public, royalty-free license to certain intellectual property related to that I-D. These activities set the stage for the latest contributions I’m announcing in this post today.

Our Latest Contributions

  • Open source implementation. Like the I-D we published in July 2023, the open source implementation focuses on applying MTL mode to the SPHINCS+ signature scheme currently being standardized in FIPS 205 as SLH-DSA (Stateless Hash-Based Digital Signature Algorithm) by the National Institute of Standards and Technology (NIST). We chose SPHINCS+ because it is the most conservative of NIST’s post-quantum signature algorithms from a cryptographic perspective, being hash-based and stateless. We remain open to adding other post-quantum signature schemes to the I-D and to the open source implementation.
    We encourage developers to try out the open source implementation of MTL mode, which we introduced at the IETF 118 Hackathon, as the community’s experience will help improve the understanding of MTL mode and its applications, and thereby facilitate its standardization. We are interested in feedback both on whether MTL mode is effective in reducing the size impact of post-quantum signatures on DNSSEC and other use cases, and on the open source implementation itself. We are particularly interested in the community’s input on what language bindings would be useful and on which cryptographic libraries we should support initially. The open source implementation can be found on GitHub at: https://github.com/verisign/MTL
  • MTL mode for DNSSEC I-D. This specification describes how to use MTL mode signatures with DNSSEC, including DNSKEY and RRSIG record formats. The I-D also provides initial guidance for DNSSEC key creation, signature generation, and signature verification in MTL mode. We consider the I-D as an example of the kinds of contributions that can help to address the “Research Agenda for a Post-Quantum DNSSEC,” the subject of another I-D recently co-authored by Verisign. We expect to continue to update this I-D based on community feedback. While our primary focus is on the DNSSEC use case, we are also open to collaborating on other applications of MTL mode.
  • Expanded patent license. Verisign previously announced a public, royalty-free license to certain intellectual property related to the MTL mode I-D that we published in July 2023. With the availability of the open source implementation and the MTL mode for DNSSEC specification, the company has expanded its public license terms to include royalty-free terms for implementing and using MTL mode if the I-D is published as an Experimental, Informational, or Standards Track RFC. In addition, the company has made a similar license grant for the use of MTL mode with DNSSEC. See the MTL mode I-D IPR declaration and the MTL mode for DNSSEC I-D IPR declaration for the official language.

Verisign is grateful for the DNS community’s interest in this area, and we are pleased to serve as stewards of the internet when it comes to developing new technology that can help the internet grow and thrive. Our work on MTL mode is one of the longer-term efforts supporting our mission to enhance the security, stability, and resiliency of the global DNS. We’re encouraged by the progress that has been achieved, and we look forward to further collaborations as we prepare for a post-quantum future.

Footnotes

  1. While it’s possible that other post-quantum algorithms could be standardized that don’t have large signatures, they wouldn’t have been studied for as long. Indeed, our preferred approach for long-term resilience of DNSSEC is to use the most conservative of the post-quantum signature algorithms, which also happens to have the largest signatures. By making that choice practical, we’ll have a solution in place whether or not a post-quantum algorithm with a smaller signature size is eventually available.

Domain Name Industry Brief Quarterly Report: DNIB.com Announces 359.3 Million Domain Name Registrations in the Third Quarter of 2023
https://blog.verisign.com/domain-names/q3-2023-domain-name-industry-brief-quarterly-report/ | Wed, 15 Nov 2023

Today, the latest issue of The Domain Name Industry Brief Quarterly Report was released by DNIB.com, showing the third quarter of 2023 closed with 359.3 million domain name registrations across all top-level domains (TLDs), an increase of 2.7 million domain name registrations, or 0.8%, compared to the second quarter of 2023. Domain name registrations also increased by 8.5 million, or 2.4%, year over year.

Check out the latest issue of The Domain Name Industry Brief Quarterly Report to see domain name stats from the third quarter of 2023, including:

  • Top 10 largest TLDs by number of reported domain names
  • Top 10 largest ccTLDs by number of reported domain names
  • ngTLDs as a percentage of total TLDs
  • Geographical ngTLDs as a percentage of total corresponding geographical TLDs

DNIB.com and The Domain Name Industry Brief Quarterly Report are sponsored by Verisign. To see past issues of the quarterly report and interactive dashboards, and to learn about DNIB.com’s statistical methodology, please visit DNIB.com.

Verisign Celebrates Hispanic Heritage Month
https://blog.verisign.com/security/celebrating-hispanic-heritage-month/ | Fri, 22 Sep 2023

Celebrating National Hispanic Heritage Month reminds us how the wide range of perspectives and experiences among our employees makes us stronger both as a company and as a steward of the internet. In honor of this month, we are proud to recognize the stories of three of our Hispanic employees, and the positive impact they make at Verisign.

Carlos Ruesta

As Verisign’s director of information security, Carlos Ruesta draws inspiration from his father’s commitment to community as an agricultural engineer in Peru, working to bring safe food and water to isolated communities. His father’s experiences inform Carlos’ belief in Verisign’s mission of enabling the world to connect online with reliability and confidence, anytime, anywhere, and motivate his work as part of a team that ensures trust.

As a leader in our security compliance division, Carlos ensures that his team maintains a robust governance, risk, and compliance framework, translating applicable laws and regulations into security control requirements. “Being part of a team that emphasizes trust motivates me,” he said. “Management trusts me to make decisions affecting large-scale projects that protect our company. This allows me to use my problem-solving skills and leadership abilities.”

Carlos commends Verisign’s respectful and encouraging environment, which he considers vital in cultivating successful career paths for newcomers navigating the cybersecurity field. He says by recognizing individual contributions and supporting each other’s professional growth, Hispanic employees at Verisign feel a sense of belonging in the workplace and are able to excel in their career journeys.

Alejandro Gonzalez Roman

Alejandro Gonzalez Roman, a senior UX designer at Verisign, combines his artistic talent with technical expertise in his role, collaborating among various departments across Verisign. “My dad is an artist, and still one of my biggest role models,” he said. “He taught me that to be good at anything means to dedicate a lot of time to perfecting your craft. I see art as a way to inspire people to make the world a better place. In my job as a UX designer, I use art to make life a little easier for people.”

As a UX designer, Alejandro strives to make technology accessible to everyone, regardless of background or abilities. He believes that life experiences and cultural knowledge provide individuals with a unique perspective, which he considers an invaluable source of inspiration when designing. And with the Hispanic population being one of the largest minorities in the United States, cultural knowledge is crucial. Understanding how different people interact with technology and integrating cultural insights into the work is essential to good UX design.

Overall, Alejandro is motivated by the strong sense of teamwork at Verisign. “Day-to-day work with our strong team has helped me improve my work,” he said. “With collaboration and encouragement, we push each other to be better UX designers. I couldn’t succeed as I have without this amazing team around me.”

Rebecca Bustamante

Rebecca Bustamante, senior manager of operations analysis, says Verisign’s “people-first” culture is part of her motivation, and she is grateful for the opportunities that allowed her to take on different roles within the company to learn and broaden her skills. “I’ve had opportunities because people believed in my potential and saw my work ethic,” she said. “These experiences have given me the understanding and skills to succeed at the job I have today.”

One of these experiences was joining the WIT@Verisign (Women in Technology) leadership team, which proved instrumental to her personal growth and led to valuable work friendships. In fact, one of her most cherished memories at Verisign includes leading a Verisign Cares team project in Virginia’s Great Falls Park, where she and her coworkers worked together to clear invasive plants and renovate walking paths.

Rebecca sees this type of camaraderie among employees as a crucial part of the people-first culture at Verisign. She particularly commends Verisign’s team leaders who value consistent communication and take the time to listen to people’s stories, which fosters an authentic understanding. This approach makes collaboration more natural and allows teamwork to develop organically. Rebecca emphasizes the significance of celebrating her culture, as it directly influences her job performance and effective communication. But she pointed out that the term “Hispanic” encompasses a wide diversity of peoples and nations. She advocates respect, practices active listening, and promotes a culture celebrating each other’s successes.

Joining the Verisign Team

These three individuals – as well as their many team members – contribute to Verisign’s efforts to enable and enhance the security, stability, and resiliency of key internet infrastructure every single day.

At Verisign, we recognize the importance of talent and culture in driving an environment that fosters high performance, inclusion, and integrity in all aspects of our work. It’s why recruiting and retaining the very best talent is our continual focus. If you would like to be part of the Verisign Team, please visit Verisign Careers.

Domain Name Industry Brief Quarterly Report: DNIB.com Announces 356.6 Million Domain Name Registrations in the Second Quarter of 2023
https://blog.verisign.com/domain-names/q2-2023-domain-name-industry-brief-quarterly-report/ | Thu, 07 Sep 2023

Today, the latest issue of The Domain Name Industry Brief Quarterly Report was released by DNIB.com, showing the second quarter of 2023 closed with 356.6 million domain name registrations across all top-level domains (TLDs), an increase of 1.7 million domain name registrations, or 0.5%, compared to the first quarter of 2023. Domain name registrations also increased by 4.3 million, or 1.2%, year over year.


Check out the latest issue of The Domain Name Industry Brief Quarterly Report to see domain name stats from the second quarter of 2023, including:

  • Top 10 largest TLDs by number of reported domain names
  • Top 10 largest ccTLDs by number of reported domain names
  • ngTLDs as a percentage of total TLDs
  • Geographical ngTLDs as a percentage of total corresponding geographical TLDs

With the launch of the DNIB.com dashboards, 16 additional TLDs have been included in applicable calculations. The applicable current and historical data presented in this edition of the quarterly report have been adjusted accordingly, and applicable quarterly and year-over-year trends have been calculated using those adjusted figures. More information is available at DNIB.com.

DNIB.com and The Domain Name Industry Brief Quarterly Report are sponsored by Verisign. To see past issues of the quarterly report and interactive dashboards, and to learn about DNIB.com’s statistical methodology, please visit DNIB.com.

Verisign Will Help Strengthen Security with DNSSEC Algorithm Update
https://blog.verisign.com/security/dnssec-algorithm-update/ | Thu, 10 Aug 2023

As part of Verisign’s ongoing effort to make global internet infrastructure more secure, stable, and resilient, we will soon make an important technology update to how we protect the top-level domains (TLDs) we operate. The vast majority of internet users won’t notice any difference, but the update will support enhanced security for several Verisign-operated TLDs and pave the way for broader adoption and the next era of Domain Name System (DNS) security measures.

Beginning in the next few months and continuing through the end of 2023, we will upgrade the algorithm we use to sign domain names in the .com, .net, and .edu zones with Domain Name System Security Extensions (DNSSEC).

In this blog, we’ll outline the details of the upcoming change and what members of the DNS technical community need to know.

DNSSEC Adoption

DNSSEC provides data authentication security to DNS responses. It does this by ensuring any altered data can be detected and blocked, thereby preserving the integrity of DNS data. Think of it as a chain of trust – one that helps avoid misdirection and allows users to trust that they have gotten to their intended online destination safely and securely.
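
As a rough illustration of that chain of trust, the sketch below looks up the two record types that link a signed zone to its parent: the DS record published in the parent zone and the DNSKEY records published in the child zone. The use of the third-party dnspython library and the example domain are assumptions for illustration, not part of Verisign’s tooling.

```python
# pip install dnspython
import dns.resolver

domain = "verisign.com"  # any DNSSEC-signed domain works here

# DS record(s) live in the parent zone and identify the child's key-signing key.
for ds in dns.resolver.resolve(domain, "DS"):
    print(f"DS (parent):    key tag {ds.key_tag}, algorithm {ds.algorithm}")

# DNSKEY records live in the child zone; the SEP flag marks key-signing keys.
for key in dns.resolver.resolve(domain, "DNSKEY"):
    role = "KSK" if key.flags & 0x0001 else "ZSK"
    print(f"DNSKEY (child): {role}, algorithm {key.algorithm}")
```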

Verisign has long been at the forefront of DNSSEC adoption. In 2010, a major milestone occurred when the Internet Corporation for Assigned Names and Numbers (ICANN) and Verisign signed the DNS root zone with DNSSEC. Shortly after, Verisign introduced DNSSEC to its TLDs, beginning with .edu in mid-2010, .net in late 2010, and .com in early 2011. Additional TLDs operated by Verisign were subsequently signed as well.

In the time since we signed our TLDs, we have worked continuously to help members of the internet ecosystem take advantage of DNSSEC. We do this through a wide range of activities, including publishing technical resources, leading educational sessions, and advocating for DNSSEC adoption in industry and technical forums.

Growth Over Time

Since the TLDs were first signed, we have observed two very distinct phases of growth in the number of signed second-level domains (SLDs).

The first growth phase occurred from 2012 to 2020. During that time, signed domains in the .com zone grew at about 0.1% of the base per year on average, reaching just over 1% by the end of 2020. In the .net zone, signed domains grew at about 0.1% of the base per year on average, reaching 1.2% by the end of 2020. These numbers demonstrated a slow but steady increase, which can be seen in Figure 1.

Figure 1: A chart spanning 2010 through the present shows the number of .com and .net domain names with DS (Delegation Signer) records, which form a link in the DNSSEC chain of trust for signed domains. The rise indicates an uptick in DNSSEC adoption among SLDs.

We’ve observed more pronounced growth in signed SLDs during the second growth phase, which began in 2020. This is largely due to a single registrar that enabled DNSSEC by default for its new registrations. For .com, the annual rate increased to 0.9% of the base, and for .net, it increased to 1.1% of the base. Currently, 4.2% of .com domains are signed and 5.1% of .net domains are signed. This accelerated growth is also visible in Figure 1.

As we look forward, Verisign anticipates continued growth in the number of domains signed with DNSSEC. To support continued adoption and help further secure the DNS, we’re planning to make one very important change.

Rolling the Algorithm

All Verisign TLDs are currently signed with DNSSEC algorithm 8, also known as RSA/SHA-256, as documented in our DNSSEC Practice Statements. Currently, we use a 2048-bit Key Signing Key (KSK) and 1280-bit Zone Signing Keys (ZSKs). The RSA algorithm has served us (and the broader internet) well for many years, but we wanted to take the opportunity to implement more robust security measures while also making more efficient use of the resources that support DNSSEC-signed domain names.

We are planning to transition to the Elliptic Curve Digital Signature Algorithm (ECDSA), specifically Curve P-256 with SHA-256 (algorithm number 13), which allows for smaller signatures and improved cryptographic strength. The smaller signature size has a secondary benefit as well: responses carrying these signatures offer less amplification to potential DDoS attacks, which could help protect victims from bad actors and cybercriminals.

Support for DNSSEC signing and validation with ECDSA has been well-established by various managed DNS providers, 78 other TLDs, and nearly 10 million signed SLDs. Additionally, research performed by APNIC and NLnet Labs shows that ECDSA support in validating resolvers has increased significantly in recent years.

The Road to Algorithm 13

How did we get to this point? It took a lot of careful preparation and planning, but with internet stewardship at the forefront of our mission, we wanted to protect the DNS with the best technologies available to us. This means taking precise measures in everything we do, and this transition is no exception.

Initial Planning

Algorithm 13 was on our radar for several years before we officially kicked off the implementation process this year. As mentioned previously, the primary motivating properties were the smaller signature size, with each signature being 96 bytes smaller than our current RSA signatures (160 bytes vs. 64 bytes), and the improved cryptographic strength. This helps us plan for the future and prepare for a world where more domain names are signed with DNSSEC.
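
The arithmetic behind those figures is simple; the sketch below just restates it in Python. The range of RRSIG counts per response is hypothetical, since the actual count varies by query type and zone configuration.

```python
rsa_1280_sig = 160    # bytes: current algorithm 8 ZSK signature size cited above
ecdsa_p256_sig = 64   # bytes: algorithm 13 signature size cited above

for rrsigs in (1, 2, 4):  # hypothetical number of RRSIGs in a response
    saved = rrsigs * (rsa_1280_sig - ecdsa_p256_sig)
    print(f"{rrsigs} RRSIG(s) per response -> {saved} bytes saved")
```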

Testing

Each TLD will first implement the rollover to algorithm 13 in Verisign’s Operational Test & Evaluation (OT&E) environment prior to implementing the process in production, for a total of two rollovers per TLD. Combined, this will result in six total rollovers across the .com, .net, and .edu TLDs. Rollovers between the individual TLDs will be spaced out to avoid overlap where possible.

The algorithm rollover for each TLD will follow this sequence of events:

  1. Publish algorithm 13 ZSK signatures alongside algorithm 8 ZSK signatures
  2. Publish algorithm 13 DNSKEY records alongside algorithm 8 DNSKEY records
  3. Publish the algorithm 13 DS record in the root zone and stop publishing the algorithm 8 DS record
  4. Stop publishing algorithm 8 DNSKEY records
  5. Stop publishing algorithm 8 ZSK signatures

Only when a successful rollover has been done in OT&E will we begin the process in production.
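
For readers who want to observe the production rollovers from the outside, a query for each TLD’s DNSKEY RRset will show the transition: between steps 2 and 4 of the sequence above, algorithm 8 and algorithm 13 keys should appear side by side, and after step 4 only algorithm 13 keys should remain. A minimal sketch, again assuming the third-party dnspython library rather than any Verisign tooling:

```python
# pip install dnspython
import dns.resolver

for tld in ("edu.", "net.", "com."):
    keys = dns.resolver.resolve(tld, "DNSKEY")
    algorithms = sorted({int(key.algorithm) for key in keys})
    print(f"{tld:<5} DNSKEY algorithms present: {algorithms}")
```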

Who is affected, and when is the change happening?

Now that we’ve given the background, we know you’re wondering: how might this affect me?

The change to a new DNSSEC-signing algorithm is expected to have no impact for the vast majority of internet users, service providers, and domain registrants. According to the aforementioned research by APNIC and NLnet Labs, most DNSSEC validators support ECDSA, and any that do not will simply ignore the signatures and still be able to resolve domains in Verisign-operated TLDs.

Regarding timing, we plan to begin the transition to ECDSA in the third and fourth quarters of this year. We will start the transition process with .edu, then .net, and then .com. We are currently aiming to have these three TLDs transitioned before the end of the fourth quarter of 2023, but we will let the community know if our timeline shifts.

Conclusion

This algorithm rollover demonstrates yet another critical step we are taking, as leaders in DNSSEC adoption, toward making the internet more secure, stable, and resilient. We look forward to enabling the change later this year, providing more efficient and stronger cryptographic security while optimizing resource utilization for DNSSEC-signed domain names.

Next Steps in Preparing for Post-Quantum DNSSEC
https://blog.verisign.com/security/post-quantum-dnssec-preparation/ | Thu, 20 Jul 2023

In 2021, we discussed a potential future shift from established public-key algorithms to so-called “post-quantum” algorithms, which may help protect sensitive information after the advent of quantum computers. We also shared some of our initial research on how to apply these algorithms to the Domain Name System Security Extensions, or DNSSEC. In the time since that blog post, we’ve continued to explore ways to address the potential operational impact of post-quantum algorithms on DNSSEC, while also closely tracking industry research and advances in this area.

Now, significant activities are underway that are setting the timeline for the availability and adoption of post-quantum algorithms. Since DNS participants – including registries and registrars – use public-key cryptography in a number of their systems, these systems may all eventually need to be updated to use the new post-quantum algorithms. We also announce two major contributions that Verisign has made in support of standardizing this technology: an Internet-Draft as well as a public, royalty-free license to certain intellectual property related to that Internet-Draft.

In this blog post, we review the changes that are on the horizon, what they mean for the DNS ecosystem, and one way we are proposing to ease the implementation of post-quantum signatures: Merkle Tree Ladder mode.

By taking these actions, we aim to be better prepared (while also helping others prepare) for a future where cryptanalytically relevant quantum computing and post-quantum cryptography become a reality.

Recent Developments

In July 2022, the National Institute of Standards and Technology (NIST) selected one post-quantum encryption algorithm and three post-quantum signature algorithms for standardization, with standards for these algorithms arriving as early as 2024. In line with this work, the Internet Engineering Task Force (IETF) has also started standards development activities on applying post-quantum algorithms to internet protocols in various working groups, including the newly formed Post-Quantum Use in Protocols (PQUIP) working group. And finally, the National Security Agency (NSA) recently announced that National Security Systems are expected to transition to post-quantum algorithms by 2035.

Collectively, these announcements and activities indicate that many organizations are envisioning a (post-)quantum future, across many protocols. Verisign’s main concern continues to be how post-quantum cryptography impacts the DNS, and in particular, how post-quantum signature algorithms impact DNSSEC.

DNSSEC Considerations

The standards being developed in the next few years are likely to be the ones deployed when the post-quantum transition eventually takes place, so now is the time to take operational requirements for specific protocols into account.

For DNSSEC, the operational concerns are twofold.

First, the large signature sizes of the current post-quantum signatures selected by NIST would result in DNSSEC responses that exceed the size limits of the User Datagram Protocol (UDP), which is broadly deployed in the DNS ecosystem. While the Transmission Control Protocol (TCP) and other transports are available, the additional overhead of having large post-quantum signatures on every response — which can be one to two orders of magnitude larger than traditional signatures — introduces operational risk to the DNS ecosystem that would be preferable to avoid.

Second, the large signatures would significantly increase memory requirements for resolvers using in-memory caches and authoritative nameservers using in-memory databases.
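
To put the first of these concerns in rough numbers, the sketch below compares approximate signature sizes against the 1232-byte EDNS buffer size that many operators recommend for UDP responses. The sizes are assumed, ballpark values rather than figures from this post, and a real response carries more than a single signature, so this understates the pressure.

```python
EDNS_UDP_BUDGET = 1232  # a commonly recommended EDNS buffer size, in bytes

approx_signature_bytes = {   # approximate sizes; exact values vary by parameter set
    "ECDSA P-256":             64,
    "RSA-2048":                256,
    "Falcon-512":              666,
    "ML-DSA-44 (Dilithium2)":  2420,
    "SLH-DSA (SPHINCS+-128s)": 7856,
}

for name, size in approx_signature_bytes.items():
    verdict = "fits within" if size <= EDNS_UDP_BUDGET else "exceeds"
    print(f"{name:<24} {size:>5} bytes -> one signature {verdict} "
          f"a {EDNS_UDP_BUDGET}-byte UDP budget")
```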

Figure 1: Size impact of traditional and post-quantum signatures on a fully signed DNS zone. Horizontal bars show the percentage of the zone that would be signature data for two traditional and two post-quantum algorithms; vertical bars show the percentage increase in zone size due to signature data.

Figure 1, from Andy Fregly’s recent presentation at OARC 40, shows the impact on a fully signed DNS zone where, on average, there are 2.2 digital signatures per resource record set (covering both existence and non-existence proofs). The horizontal bars show the percentage of the zone file that would be comprised of signature data for the two prevalent current algorithms, RSA and ECDSA, and for the smallest and largest of the NIST PQC algorithms. At the low and high end of these examples, signatures with ECDSA would take up 40% of the zone and SPHINCS+ signatures would take up over 99% of the zone. The vertical bars give the percentage size increase of the zone file due to signatures. Again, comparing the low and high end, a zone fully signed with SPHINCS+ would be about 50 times the size of a zone fully signed with ECDSA.
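
The shape of Figure 1 can be approximated with a little arithmetic. The sketch below back-solves a per-record-set payload size from the ECDSA percentage quoted above and then applies a SPHINCS+ signature size; the signature sizes and the resulting payload figure are assumptions for illustration, not measurements from the presentation.

```python
sigs_per_rrset = 2.2                 # average number of signatures per RRset, cited above
ecdsa_sig, sphincs_sig = 64, 7856    # approximate signature sizes in bytes (assumed)

# Back-solve the non-signature payload per RRset from "ECDSA signatures ~ 40% of the zone".
ecdsa_sig_bytes = sigs_per_rrset * ecdsa_sig
payload = ecdsa_sig_bytes * 60 / 40          # ~211 bytes of non-signature data per RRset

sphincs_sig_bytes = sigs_per_rrset * sphincs_sig
ecdsa_zone = payload + ecdsa_sig_bytes
sphincs_zone = payload + sphincs_sig_bytes

print(f"SPHINCS+ signature share of zone: {sphincs_sig_bytes / sphincs_zone:.1%}")  # ~99%
print(f"Zone size ratio, SPHINCS+ vs. ECDSA: {sphincs_zone / ecdsa_zone:.0f}x")     # ~50x
```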

Merkle Tree Ladder Mode: Reducing Size Impact of Post-Quantum Signatures

In his 1988 article, “The First Ten Years of Public-Key Cryptography,” Whitfield Diffie, co-discoverer of public-key cryptography, commented on the lack of progress in finding public-key encryption algorithms that were as fast as the symmetric-key algorithms of the day: “Theorems or not, it seemed silly to expect that adding a major new criterion to the requirements of a cryptographic system could fail to slow it down.”

Diffie’s counsel also appears relevant to the search for post-quantum algorithms: It would similarly be surprising if adding the “major new criterion” of post-quantum security to the requirements of a digital signature algorithm didn’t impact performance in some way. Signature size may well be the tradeoff for post-quantum security, at least for now.

With this tradeoff in mind, Verisign’s post-quantum research team has explored ways to address the size impact, particularly to DNSSEC, arriving at a construction we call a Merkle Tree Ladder (MTL), a generalization of a single-rooted Merkle tree (see Figure 2). We have also defined a technique that we call the Merkle Tree Ladder mode of operation for using the construction with an underlying signature algorithm.

Figure 2: A Merkle Tree Ladder consists of one or more “rungs” that authenticate or “cover” the leaves of a generalized Merkle tree. In this example, rungs 19:19, 17:18, and 1:16 are collectively the ancestors of all 19 leaves of the tree and therefore cover them. The value of each node is the hash of the values of its children, providing cryptographic protection. A Merkle authentication path consisting of sibling nodes authenticates a leaf node relative to the ladder; e.g., leaf node 7 (corresponding to message 7 beneath) can be authenticated relative to rung 1:16 by rehashing it with the sibling nodes along the path: 8, 5:6, 1:4, and 9:16. If the verifier already has a previous ladder that covers a message, the verifier can instead rehash relative to that ladder; e.g., leaf node 7 can be verified relative to rung 1:8 using sibling nodes 8, 5:6, and 1:4.

Similar to current deployments of public-key cryptography, MTL mode combines processes with complementary properties to balance performance and other criteria (see Table 1). In particular, in MTL mode, rather than signing individual messages with a post-quantum signature algorithm, ladders comprised of one or more Merkle tree nodes are signed using the post-quantum algorithm. Individual messages are then authenticated relative to the ladders using Merkle authentication paths.

  • Criterion: public-key property for encryption. Initial design with a single process: encrypt individual messages with the public-key algorithm. Improved design combining complementary processes: establish symmetric keys using the public-key algorithm, then encrypt multiple messages using each symmetric key. Benefit: the cost of public-key operations is amortized across multiple messages.
  • Criterion: post-quantum property for signatures. Initial design with a single process: sign individual messages with the post-quantum algorithm. Improved design combining complementary processes: sign Merkle Tree Ladders using the post-quantum algorithm, then authenticate multiple messages relative to each signed ladder. Benefit: the size of the post-quantum signature is amortized across multiple messages.

Table 1: Speed concerns for traditional public-key algorithms were addressed by combining them with symmetric-key algorithms (for instance, as outlined in early specifications for Internet Privacy-Enhanced Mail). Size concerns for emerging post-quantum signature algorithms can potentially be addressed by combining them with constructions such as Merkle Tree Ladders.

Although the signatures on the ladders might be relatively large, the ladders and their signatures are sent infrequently. In contrast, the Merkle authentication paths that are sent for each message are relatively short. The combination of the two processes maintains the post-quantum property while amortizing the size impact of the signatures across multiple messages. (Merkle tree constructions, being based on hash functions, are naturally post-quantum.)

The two-part approach for public-key algorithms has worked well in practice. In Transport Layer Security, symmetric keys are established in occasional handshake operations, which may be more expensive. The symmetric keys are then used to encrypt multiple messages within a session without further overhead for key establishment. (They can also be used to start a new session).

We expect that a two-part approach for post-quantum signatures can similarly work well in an application like DNSSEC where verifiers are interested in authenticating a subset of messages from a large, evolving message series (e.g., DNS records).

In such applications, signed Merkle Tree Ladders covering a range of messages in the evolving series can be provided to a verifier occasionally. Verifiers can then authenticate messages relative to the ladders, given just a short Merkle authentication path.

Importantly, due to a property of Merkle authentication paths called backward compatibility, all verifiers can be given the same authentication path relative to the signer’s current ladder. This also helps with deployment in applications such as DNSSEC, since the authentication path can be published in place of a traditional signature. An individual verifier may verify the authentication path as long as the verifier has a previously signed ladder covering the message of interest. If not, then the verifier just needs to get the current ladder.

As reported in our presentation on MTL mode at the RSA Conference Cryptographers’ Track in April 2023, our initial evaluation of the expected frequency of requests for MTL mode signed ladders in DNSSEC is promising, suggesting that a significant reduction in effective signature size impact can be achieved.
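
A rough, purely illustrative calculation shows why this is promising: if one SPHINCS+ signature covers a ladder over many messages, and a verifier only occasionally needs to fetch a newly signed ladder, the per-message cost approaches the size of a Merkle authentication path. All of the parameters below are assumptions made for illustration, not the evaluation results mentioned above.

```python
import math

SPHINCS_SIG = 7856   # approximate SPHINCS+-128s signature size, bytes (assumed)
HASH_SIZE = 32       # SHA-256 node size, bytes
LEAVES = 1_000_000                # messages covered by one ladder (assumed)
MESSAGES_PER_LADDER_FETCH = 1000  # messages verified per ladder retrieved (assumed)

auth_path_bytes = math.ceil(math.log2(LEAVES)) * HASH_SIZE
effective = auth_path_bytes + SPHINCS_SIG / MESSAGES_PER_LADDER_FETCH

print(f"authentication path: {auth_path_bytes} bytes")
print(f"effective per-message size: ~{effective:.0f} bytes "
      f"vs. {SPHINCS_SIG} bytes for a raw signature")
```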

Verisign’s Contributions to Standardization

To facilitate more public evaluation of MTL mode, Verisign’s post-quantum research team last week published the Internet-Draft “Merkle Tree Ladder Mode (MTL) Signatures.” The draft provides the first detailed, interoperable specification for applying MTL mode to a signature scheme, with SPHINCS+ as an initial example.

We chose SPHINCS+ because it is the most conservative of the NIST PQC algorithms from a cryptographic perspective, being hash-based and stateless. It is arguably most suited to be one of the algorithms in a long-term deployment of a critical infrastructure service like DNSSEC. With this focus, the specification has a “SPHINCS+-friendly” style. Implementers familiar with SPHINCS+ will find similar notation and constructions as well as common hash function instantiations. We are open to adding other post-quantum signature schemes to the draft or other drafts in the future.

Publishing the Internet-Draft is a first step toward the goal of standardizing a mode of operation that can reduce the size impact of post-quantum signature algorithms.

In support of this goal, Verisign also announced this week a public, royalty-free license to certain intellectual property related to the Internet-Draft published last week. Similar to other intellectual property rights declarations the company has made, we have announced a “Standards Development Grant” which provides the listed intellectual property under royalty-free terms for the purpose of facilitating standardization of the Internet-Draft we published on July 10, 2023. (The IPR declaration gives the official language.)

We expect to release an open-source implementation of the Internet-Draft soon, and, later this year, to publish an Internet-Draft on using MTL mode signatures in DNSSEC.

With these contributions, we invite implementers to take part in the next step toward standardization: evaluating initial versions of MTL mode to confirm whether they indeed provide practical advantages in specific use cases.

Conclusion

DNSSEC continues to be an important part of the internet’s infrastructure, providing cryptographic verification of information associated with the unique, stable identifiers in this ubiquitous namespace. That is why preparing for an eventual transition to post-quantum algorithms for DNSSEC has been and continues to be a key research and development activity at Verisign, as evidenced by our work on MTL mode and post-quantum DNSSEC more generally.

Our goal is that with a technique like MTL mode in place, protocols like DNSSEC can preserve the security characteristics of a pre-quantum environment while minimizing the operational impact of larger signatures in a post-quantum world.

In a later blog post, we’ll share more details on some upcoming changes to DNSSEC, and how these changes will provide both security and operational benefits to DNSSEC in the near term.

Verisign plans to continue to invest in research and standards development in this area, as we help prepare for a post-quantum future.

Announcing the Launch of DNIB.com, a New Source for DNS News, Information, Research, and Analysis
https://blog.verisign.com/domain-names/announcing-launch-dnib-website/ | Thu, 22 Jun 2023

Verisign today announced the launch of DNIB.com, the new Domain Name Industry Brief (DNIB) website.

Sponsored by Verisign, DNIB.com is a source for insights and analysis from subject-matter experts on key topics relevant to the global Domain Name System (DNS). DNIB.com will offer insight on policy, governance, technology, security, and business trends relevant to analysts, entrepreneurs, policymakers, and anyone with an interest in the DNS. The website features a collection of new, searchable, and interactive dashboards tracking relevant DNS data and trends, and is designed to be a valuable day-to-day resource for industry stakeholders and anyone interested in learning more about global domain name operations.

DNIB.com is also the new home of the DNIB quarterly report, which Verisign has published for more than a decade, providing a trusted and valued resource for stakeholders across the globe seeking to understand the dynamism and trends of the domain name industry.

The report will be published each quarter at DNIB.com, summarizing the state of the domain name industry through a variety of statistical and analytical research. The new and expanded DNIB.com dashboards take that statistical data to the next level, enabling exploration of trend data across the industry, providing additional history and depth, and offering expert insights and commentary.

Verisign Domain Name Industry Brief: 354.0 Million Domain Name Registrations in the First Quarter of 2023
https://blog.verisign.com/domain-names/verisign-q1-2023-the-domain-name-industry-brief/ | Thu, 08 Jun 2023

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the first quarter of 2023 closed with 354.0 million domain name registrations across all top-level domains (TLDs), an increase of 3.5 million domain name registrations, or 1.0%, compared to the fourth quarter of 2022.1,2 Domain name registrations also increased by 3.5 million, or 1.0%, year over year.1,2

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the first quarter of 2023.

This issue of the Domain Name Industry Brief includes a correction to the March 2023 issue, which incorrectly reported the number of domain name registrations in the .eu ccTLD.2 This was the result of a one-time error in the .eu domain name registration data, provided by ZookNIC, which has since been resolved.

To see past issues of The Domain Name Industry Brief, please visit https://verisign.com/dnibarchives.

  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq, and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in Vol. 19, Issue 1 of The Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

Building a More Secure Routing System: Verisign’s Path to RPKI
https://blog.verisign.com/security/verisign-rpki-adoption/ | Mon, 05 Jun 2023

This blog was co-authored by Verisign Distinguished Engineer Mike Hollyman and Verisign Director – Engineering Hasan Siddique. It is based on a lightning talk they gave at NANOG 87 in February 2023, the slides from which are available on the NANOG website.

At Verisign, we believe that continuous improvements to the safety and security of the global routing system are critical for the reliability of the internet. As such, we’ve recently embarked on a path to implement Resource Public Key Infrastructure (RPKI) within our technology ecosystem as a step toward building a more secure routing system. In this blog, we share our ongoing journey toward RPKI adoption and the lessons we’ve learned as an operator of critical internet infrastructure.

While RPKI is not a silver bullet for securing internet routing, practical adoption of RPKI can deliver significant benefits. This will be a journey of deliberate, measured, and incremental steps towards a larger goal, but we believe the end result will be more than worth it.

Why RPKI and why now?

Under the Border Gateway Protocol (BGP) – the internet’s de facto inter-domain routing protocol for the last three decades – local routing policies decide where and how internet traffic flows, but each network independently applies its own policies on what actions it takes, if any, with the traffic that passes through its network. For years, “routing by rumor” served the internet well; however, our growing dependence upon the global internet for sensitive and critical communications means that internet infrastructure merits a more robust approach for protecting routing information. Preventing route leaks, mis-originations, and hijacks is a first step.

Verisign was one of the first organizations to join the Mutually Agreed Norms for Routing Security (MANRS) Network Operator Program in 2017. Ever since the establishment of the program, facilitating routing information – via an Internet Routing Registry (IRR) or RPKI – has been one of the key “actions” of the MANRS program. Verisign has always been fully supportive of MANRS and its efforts to promote a culture of collective responsibility, collaboration, and coordination among network peers in the global internet routing system.

Just as RPKI creates new protections, it also brings new challenges. Mindful of those challenges, but committed to our mission of upholding the security, stability, and resiliency of the internet, Verisign is heading toward RPKI adoption.

Adopting RPKI ROV and External Dependencies

In his March 2022 blog titled “Routing Without Rumor: Securing the Internet’s Routing System,” Verisign EVP & CSO, Danny McPherson, discussed how “RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself.” McPherson’s blog also reviewed the importance of securing the global internet BGP routing system, including utilizing RPKI to help overcome the hurdles that BGP’s implicit trust model presents.

RPKI Route Origin Validation (ROV) is one critical step forward in securing the global BGP system to prevent mis-originations and errors from propagating invalid routing information worldwide. RPKI ROV helps move the needle towards a safer internet. However, just as McPherson pointed out, this comes at the expense of creating a new external dependency within the operational path of Verisign’s critical Domain Name System (DNS) services.
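
The ROV decision itself is simple to state. The sketch below implements the three possible outcomes (valid, invalid, not found) in the spirit of RFC 6811 against a hand-made list of validated ROA payloads; the prefixes and AS numbers are documentation examples, not real routing data, and a production deployment would take VRPs from RPKI validator software rather than a hard-coded list.

```python
from ipaddress import ip_network

# Validated ROA payloads (VRPs): (prefix, max_length, authorized origin ASN).
# Documentation prefixes and ASNs only; real VRPs come from RPKI validator software.
vrps = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    route = ip_network(prefix)
    covering = [v for v in vrps if route.subnet_of(v[0])]
    if not covering:
        return "not-found"   # no ROA covers this prefix
    matching = [v for v in covering
                if route.prefixlen <= v[1] and origin_asn == v[2]]
    return "valid" if matching else "invalid"

print(rov_state("192.0.2.0/24", 64500))     # valid: covered, ASN and length match
print(rov_state("198.51.100.0/25", 64501))  # invalid: more specific than maxLength
print(rov_state("203.0.113.0/24", 64502))   # not-found: no covering ROA
```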

RPKI Speed Bumps

At NANOG 87, we shared our concerns on how systemic and circular dependencies must be acknowledged and mitigated, to the extent possible. The following are some concerns and potential risks related to RPKI:

  • RPKI has yet to reach the operational maturity of related, established routing protocols, such as BGP. BGP has been around for over 30 years, but comparatively, RPKI has been growing in the Internet Engineering Task Force (IETF) Secure Inter-Domain Routing Operations (SIDROPS) working group for only 12 years. Currently, RPKI Unique Prefix-Origin Pairs are seen for just over 40% of the global routing prefixes, and much of that growth has occurred only in the last four years. Additionally, as the RPKI system gains support, we see how it occasionally fails due to a lack of maturity. The good news is that the IETF is actively engaged in making improvements to the system, and it’s rewarding to see the progress being made.
  • Every organization deploying RPKI needs to understand the circular dependencies that may arise. For example, publishing a Route Origin Authorization (ROA) in the RPKI system requires the DNS. Additionally, there are over 20 publishing points in the RPKI system today with fully qualified domain names (FQDNs) in the .com and .net top-level domains (TLDs). All five of the Regional Internet Registries (RIRs) use the .net TLD for their RPKI infrastructure.
  • Adopting RPKI means taking on additional, complex responsibilities. Organizations that participate in RPKI inherit additional operational tasks for testing, publishing, and alerting of the RPKI system and ultimately operating net-new infrastructure; however, these 24/7 services are critical when it comes to supporting a system that relates to routing stability.
  • In order to adequately monitor RPKI deployment, ample resources are required. Real-time monitoring should be considered a basic requirement for both internal and external RPKI infrastructure. As such, organizations must allocate technical engineering resources and support services to meet this need.

Additional considerations include:

  • the shared fate dependency (i.e., when all prefixes are signed with ROAs)
  • long-term engineering support
  • operational integration of RPKI systems
  • operational experience of RIRs as they now run critical infrastructure to support RPKI
  • overclaiming with the RIR certification authorities
  • lack of transparency for operator ROV policies
  • inconsistency between open source RPKI validator development efforts
  • the future scale of RPKI

These items require careful consideration before implementing RPKI, not afterwards.

Managing Risks

To better manage potential risks in our journey toward RPKI adoption, we established “day zero” requirements: firm conditions that had to be met before any further testing could occur, such as monitoring data across multiple protocols, coupled with automated ROA/IRR provisioning.

The deliberate decision to take a measured approach has proved rewarding, leaving us better positioned to manage and maintain our data and critical RPKI systems.

Investing engineering cycles in building robust monitoring and automation has increased our awareness of trends and outages based on global and local observability. As a result, operations and support teams benefit from live training on how to respond to RPKI-related events. This has helped us improve operational readiness in response to incidents. Additionally, automation reduces the risk of human error and, when coupled with monitoring, introduces stronger guardrails throughout the provisioning process.
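
As one illustration of such a guardrail, the hypothetical sketch below checks whether any currently announced route would become RPKI-invalid under a planned set of ROAs before they are published. The prefixes and AS numbers are placeholders, and this is a sketch of the general idea rather than our production tooling.

from ipaddress import ip_network

def would_become_invalid(route, origin_asn, planned_roas):
    """True if this announced route would be RPKI-invalid under the planned ROAs."""
    net = ip_network(route)
    covering = [(p, m, a) for p, m, a in planned_roas
                if net.version == ip_network(p).version and net.subnet_of(ip_network(p))]
    if not covering:
        return False  # NotFound, not Invalid; a separate check can enforce full coverage
    return not any(a == origin_asn and net.prefixlen <= m for _, m, a in covering)

planned = [("192.0.2.0/24", 24, 64511)]                         # placeholder ROA
announced = [("192.0.2.0/24", 64511), ("192.0.2.0/25", 64511)]  # placeholder routes

for route, origin in announced:
    if would_become_invalid(route, origin, planned):
        print(f"BLOCK publication: {route} (AS{origin}) would become RPKI-invalid")
# Flags 192.0.2.0/25 because its prefix length exceeds the ROA's maxLength of 24.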

Balancing Our Mission with Adopting New Technology

Verisign’s core mission is to enable the world to connect online with reliability and confidence, anytime, anywhere. This means that as we adopt RPKI, we must adhere to strict design principles that don’t risk sacrificing the integrity and availability of DNS data.

Our path to RPKI adoption is just one example of how we continuously strive for improvement and implement new technology, all while ensuring we protect Verisign’s critical DNS services.

While there are obstacles ahead of us, at Verisign we strongly advocate for consistent, focused discipline and continuous improvement. This means our course is set – we are firmly moving toward RPKI adoption.

Conclusion

Our goal is to improve internet routing security programs through efforts such as technology implementation, industry engagement, standards development, open-source contributions, funding, and the identification of shared risks that need to be understood and managed appropriately.

Implementing RPKI at your own organization will require broad investment in your people, processes, and technology stack. At Verisign specifically, we have assigned resources to perform research, increased budgets, completed various risk management tasks, and allocated significant time to development and engineering cycles. While RPKI itself does not address all security issues, there are incremental steps we can collectively take toward building a more resilient internet routing security paradigm.

As stewards of the internet, we are implementing RPKI as the next step in strengthening the security of internet routing information. We look forward to sharing updates on our progress.

The post Building a More Secure Routing System: Verisign’s Path to RPKI appeared first on Verisign Blog.

]]>
Will Altanovo’s Maneuvering Continue to Delay .web? https://blog.verisign.com/domain-names/will-maneuvering-continue-to-delay-dot-web/ Tue, 16 May 2023 20:31:42 +0000 https://blog.verisign.com/?p=6301 The launch of .web top-level domain is once again at risk of being delayed by baseless procedural maneuvering. On May 2, the Internet Corporation for Assigned Names and Numbers (ICANN) Board of Directors posted a decision on the .web matter from its April 30 meeting, which found “that NDC (Nu Dotco LLC) did not violate […]

The post Will Altanovo’s Maneuvering Continue to Delay .web? appeared first on Verisign Blog.

]]>

The launch of the .web top-level domain is once again at risk of being delayed by baseless procedural maneuvering.

On May 2, the Internet Corporation for Assigned Names and Numbers (ICANN) Board of Directors posted a decision on the .web matter from its April 30 meeting, which found “that NDC (Nu Dotco LLC) did not violate the Guidebook or the Auction Rules” and directed ICANN “to continue processing NDC’s .web application,” clearing the way for the delegation of .web. ICANN later posted a preliminary report from this meeting showing that the Board vote on the .web decision was without objection.

Less than 24 hours later, however, Altanovo (formerly Afilias) – a losing bidder whose repeatedly rejected claims already have delayed the delegation of .web for more than six years – dusted off its playbook from 2018 by filing yet another ICANN Cooperative Engagement Process (CEP), beginning the cycle of another independent review of the Board’s decision, which last time cost millions of dollars and resulted in years of delay.

Under ICANN rules, a CEP is intended to be a non-binding process designed to efficiently resolve or narrow disputes before the initiation of an Independent Review Process (IRP). ICANN places further actions on hold while a CEP is pending. It’s an important and worthwhile aspect of the multistakeholder process…when used in good faith.

But that does not appear to be what is happening here. Altanovo and its backers initiated this repeat CEP despite the fact that Altanovo lost a fair, ICANN-sponsored auction; lost the IRP in every important respect; lost its application for reconsideration of the IRP (which it was sanctioned for filing, and which the IRP panel determined to be frivolous); and has now lost before the ICANN Board.

The Board’s decision expressly found that these disputes “have delayed the delegation of .web for more than six years” and already cost each of the parties, including ICANN, “millions of dollars in legal fees.”

Further delay appears to be the only goal of this second CEP – and any follow-on IRP – because no one could conclude in good faith that an IRP panel would find that the thorough process and decision on .web established in the Board’s resolutions and preliminary report violated ICANN’s bylaws. At the end of the day, all that will be accomplished by this second CEP and a second IRP is continued delay, and delay for delay’s sake amounts to an abuse of process that threatens to undermine the multistakeholder processes and the rights of NDC and Verisign.

ICANN will, no doubt, follow its processes for resolving the CEP and any further procedural maneuvers attempted by Altanovo. But, given Altanovo’s track record of losses, delays, and frivolous maneuvering since the 2016 .web auction, a point has been reached when equity demands that this abuse of process not be allowed to thwart NDC’s right, as determined by the Board, to move ahead on its .web application.

The post Will Altanovo’s Maneuvering Continue to Delay .web? appeared first on Verisign Blog.

]]>
Verisign Honors Vets in Technology For Military Appreciation Month https://blog.verisign.com/security/verisign-honors-armed-forces/ Thu, 04 May 2023 12:52:15 +0000 https://blog.verisign.com/?p=6290 For Murray Green, working for a company that is a steward of critical internet infrastructure is a mission that he can get behind. Green, a senior engineering manager at Verisign, is a U.S. Army veteran who served during Operation Desert Storm and sees stewardship as a lifelong mission. In both roles, he has stayed focused […]

The post Verisign Honors Vets in Technology For Military Appreciation Month appeared first on Verisign Blog.

]]>

For Murray Green, working for a company that is a steward of critical internet infrastructure is a mission that he can get behind. Green, a senior engineering manager at Verisign, is a U.S. Army veteran who served during Operation Desert Storm and sees stewardship as a lifelong mission. In both roles, he has stayed focused on the success of the mission and cultivating great teamwork.

Teamwork is something that Laura Street, a software engineer and U.S. Air Force veteran, came to appreciate through her military service. It was then that she learned to appreciate how people from different backgrounds can work together on missions by finding their commonalities.

While military and civilian roles are very different, Verisign appeals to many veterans because of the mission-driven nature of the work we do.

Green and Street are two of the many veterans who have chosen to apply their military experience in a civilian career at Verisign. Both say that the work is not only rewarding to them, but to anyone who depends on Verisign’s commitment in helping to maintain the security, stability, and resiliency of the Domain Name System (DNS) and the internet.

At Verisign, we celebrate Military Appreciation Month by paying tribute to those who have served and recognizing how fortunate we are to work alongside amazing veterans whose contributions to our work provide enormous value.

Introducing Data-Powered Technology

Before joining the military, Murray Green studied electrical engineering but soon realized that his true passion was computer science. Looking for a way to pay for school and to explore and excel as a programmer analyst, he turned to the U.S. Army.

He served more than four years at the Walter Reed Army Medical Center in Washington as the sole programmer for military personnel, using a proprietary language to maintain a reporting system that supplied data analysis. It was a role that helped him recognize the importance of data to any mission – whether for the U.S. Army or a company like Verisign.

At Walter Reed, he helped usher in the age of client-server computing, which dramatically reduced data processing time. “Around this time, personal computers connected to mini servers were just coming online so, using this new technology, I was able to unload data from the mainframe and bring it down to minicomputers running programs locally, which resulted in tasks being completed without the wait times associated with conventional mainframe computing,” he said. “I was there at the right time.”

His work led him to receive the Meritorious Service Medal, recognizing his expertise in the proprietary programming language that was used to assist in preparation for Operation Desert Storm, the first mobilization of U.S. Army personnel since Vietnam.

In the military, he also came to understand the importance of leadership – “providing purpose, direction, and motivation to accomplish the mission and improve the organization.”

Green has been at Verisign for over 20 years, starting off on the registry side of the business. In that role, he helped maintain the .com/.net top-level domain (TLD) name database, which at the time held 5 million domain names. Today, he still oversees this database, managing a highly skilled team that has helped provide uninterrupted resolution service for .com and .net for over a quarter of a century.

Sense of Teamwork Leaves a Lasting Impression

Street had been in medical school, looking for a way to pay for her continued education, when she heard about the military’s Health Professional Scholarship Program and turned to the U.S. Air Force.

“I met some terrific people in the military,” she said. “My favorite experiences involved working with people who cared about others and were able to motivate them with positivity.” But it was the sense of teamwork she encountered in the military that left a lasting impression.

“There’s a sense of accountability and concern for others,” she said. “You help one another.”

While in the Education and Training department, she worked with a support team to troubleshoot a video that wasn’t loading properly and was impressed with how the developers fixed the problem. She immediately took an interest in programming and enrolled in night classes at a local community college. After completing her service in the U.S. Air Force, she went back to school to pursue a bachelor’s degree in computer science.

She’s been at Verisign for two years and, while the job itself is rewarding because it taps into so many of her interests – from Java programming to network protection and packet analysis – it was the chemistry with the team that was most enticing about the role.

“I felt as at ease as one can possibly feel during a technical interview,” she said. “I got the sense that these were people who I would want to work with.”

Street credits the military for teaching her valuable communication and teamwork skills that she continues to apply in her role, which focuses on keeping the .com and .net top-level domains available around the clock, around the world.

A Unique and Global Mission

Both Green and Street encourage service members to stay focused on the success of their personal missions and the teamwork they learned in the military, and to leverage those skills in the civilian world. Use your service as a selling point and understand that companies value that background more than you think, they said.

“Being proud of the service we provide to others and paying attention to details allows us at Verisign to make a global difference,” Green said. “The veterans on our team bring an incredible skillset that is highly valued here. I know that I’m a part of an incredible team at Verisign.”

Verisign is proud to create career opportunities where veterans can apply their military training. To learn more about our current openings, visit Verisign Careers.

The post Verisign Honors Vets in Technology For Military Appreciation Month appeared first on Verisign Blog.

]]>
Adding ZONEMD Protections to the Root Zone https://blog.verisign.com/security/root-zone-zonemd/ Tue, 18 Apr 2023 17:35:41 +0000 https://blog.verisign.com/?p=6286 The Domain Name System (DNS) root zone will soon be getting a new record type, called ZONEMD, to further ensure the security, stability, and resiliency of the global DNS in the face of emerging new approaches to DNS operation. While this change will be unnoticeable for the vast majority of DNS operators (such as registrars, […]

The post Adding ZONEMD Protections to the Root Zone appeared first on Verisign Blog.

]]>

The Domain Name System (DNS) root zone will soon be getting a new record type, called ZONEMD, to further ensure the security, stability, and resiliency of the global DNS in the face of emerging new approaches to DNS operation. While this change will be unnoticeable for the vast majority of DNS operators (such as registrars, internet service providers, and organizations), it provides a valuable additional layer of cryptographic security to ensure the reliability of root zone data.

In this blog, we’ll discuss these new proposals, as well as ZONEMD. We’ll share deployment plans, how they may affect certain users, and what DNS operators need to be aware of beforehand to ensure little to no disruption.

The Root Server System

The DNS root zone is the starting point for most domain name lookups on the internet. The root zone contains delegations to nearly 1,500 top-level domains, such as .com, .net, .org, and many others. Since its inception in 1984, various organizations known collectively as the Root Server Operators have provided the service for what we now call the Root Server System (RSS). In this system, a myriad of servers respond to approximately 80 billion root zone queries each day.

While the RSS continues to perform this function with a high degree of dependability, there are recent proposals to use the root zone in a slightly different way. These proposals create some efficiencies for DNS operators, but they also introduce new challenges.

New Proposals

In 2020, the Internet Engineering Task Force (IETF) published RFC 8806, titled “Running a Root Server Local to a Resolver.” Along the same lines, in 2021 the Internet Corporation for Assigned Names and Numbers (ICANN) Office of the Chief Technology Officer published OCTO-027, titled “Hyperlocal Root Zone Technical Analysis.” Both proposals share the idea that recursive name servers can receive and load the entire root zone locally and respond to root zone queries directly.

But in a scenario where the entire root zone is made available to millions of recursive name servers, a new question arises: how can consumers of zone data verify that zone content has not been modified before reaching their systems?

One might imagine that DNS Security Extensions (DNSSEC) could help. However, while the root zone is indeed signed with DNSSEC, most of the records in the zone are considered non-authoritative (i.e., all the NS and glue records) and therefore do not have signatures. What about something like a Pretty Good Privacy (PGP) signature on the root zone file? That comes with its own challenge: in PGP, the detached signature is easily separated from the data. For example, there is no way to include a PGP signature over DNS zone transfer, and there is no easy way to know which version of the zone goes with the signature.

Introducing ZONEMD

A solution to this problem comes from RFC 8976. Led by Verisign and titled “Message Digest for DNS Zones” (known colloquially as ZONEMD), this protocol calls for a cryptographic digest of the zone data to be embedded into the zone itself. This ZONEMD record can then be signed and verified by consumers of the zone data. Here’s how it works:

Each time a zone is updated, the publisher calculates the ZONEMD record by sorting and canonicalizing all the records in the zone and providing them as input to a message digest function. Sorting and canonicalization are the same as for DNSSEC. In fact, the ZONEMD calculation can be performed at the same time the zone is signed. Digest calculation necessarily excludes the ZONEMD record itself, so the final step is to update the ZONEMD record and its signatures.

A recipient of a zone that includes a ZONEMD record repeats the same calculation and compares its calculated digest value with the published digest. If the zone is signed, then the recipient can also validate the correctness of the published digest. In this way, recipients can verify the authenticity of zone data before using it.
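
The sketch below illustrates this publish-then-verify idea in Python. It is deliberately simplified: RFC 8976 implementations hash wire-format records in DNSSEC canonical order, whereas this toy version hashes sorted presentation-format lines, and the zone contents shown are illustrative.

import hashlib

def compute_digest(zone_lines):
    # Exclude the ZONEMD record itself, sort the remaining records, and hash them.
    records = sorted(line.strip() for line in zone_lines
                     if line.strip() and " ZONEMD " not in line)
    return hashlib.sha384("\n".join(records).encode()).hexdigest()

def verify(zone_lines, published_digest):
    return compute_digest(zone_lines) == published_digest

zone = [
    ". 86400 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2021101902 1800 900 604800 86400",
    ". 518400 IN NS a.root-servers.net.",
    "com. 172800 IN NS a.gtld-servers.net.",
]
digest = compute_digest(zone)   # the publisher embeds this value in the ZONEMD record
print(verify(zone, digest))     # a recipient recomputes and compares: True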

A number of open-source DNS software products now, or soon will, include support for ZONEMD verification. These include Unbound (version 1.13.2), NSD (version 4.3.4), Knot DNS (version 3.1.0), PowerDNS Recursor (version 4.7.0) and BIND (version 9.19).

Who Is Affected?

Verisign, ICANN, and the Root Server Operators are taking steps to ensure that the addition of the ZONEMD record in no way impacts the ability of the root server system to receive zone updates and to respond to queries. As a result, most internet users are not affected by this change.

Anyone using RFC 8806, or a similar technique to load root zone data into their local resolver, is also unlikely to be affected. Software products that implement those features should be able to fully process a zone that includes the new record type, especially for the reasons described below. Once the record has been added, users can take advantage of ZONEMD verification to ensure root zone data is authentic.

Users most likely to be affected are those that receive root zone data from the internic.net servers (or some other source) and use custom software to parse the zone file. Depending on how such custom software is designed, there is a possibility that it will treat the new ZONEMD record as unexpected and lead to an error condition. Key objectives of this blog post are to raise awareness of this change, provide ample time to address software issues, and minimize the likelihood of disruptions for such users.
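
One defensive pattern for such custom software is to skip record types it does not recognize instead of raising an error. The fragment below is a toy illustration of that idea in Python, not a complete zone-file parser; real parsers also need to handle multi-line (parenthesized) records such as the ZONEMD example shown later in this post.

KNOWN_TYPES = {"SOA", "NS", "A", "AAAA", "DS", "DNSKEY", "RRSIG", "NSEC", "TXT"}

def handle_line(line):
    """Process one presentation-format line, ignoring unknown record types."""
    fields = line.split()
    if len(fields) < 4:
        return None        # blank line or continuation; real parsers do more here
    rrtype = fields[3]     # assumes the "owner TTL class type rdata" layout
    if rrtype not in KNOWN_TYPES:
        return None        # e.g., ZONEMD: skip rather than raise an error
    return fields          # hand known records to the existing processing logic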

Deployment Plan

In 2020, Verisign asked the Root Zone Evolution Review Committee (RZERC) to consider a proposal for adding data protections to the root zone using ZONEMD. In 2021, the RZERC published its recommendations in RZERC003. One of those recommendations was for Verisign and ICANN to develop a deployment plan and make the community aware of the plan’s details. That plan is summarized in the remainder of this blog post.

Phased Rollout

One attribute of a ZONEMD record is the choice of a hash algorithm used to create the digest. RFC 8976 defines two standard hash algorithms – SHA-384 and SHA-512 – and a range of “private-use” algorithms.

Initially, the root zone’s ZONEMD record will have a private-use hash algorithm. This allows us to first include the record in the zone without anyone worrying about the validity of the digest values. Since the hash algorithm is from the private-use range, a consumer of the zone data will not know how to calculate the digest value. A similar technique, known as the “Deliberately Unvalidatable Root Zone,” was utilized when DNSSEC was added to the root zone in 2010.

After a period of more than two months, the ZONEMD record will transition to a standard hash algorithm.

Hash Algorithm

SHA-384 has been selected for the initial implementation for compatibility reasons.

The developers of BIND implemented the ZONEMD protocol based on an early Internet-Draft, some time before it was published as an RFC. Unfortunately, the initial BIND implementation only accepts ZONEMD records with a digest length of 48 bytes (i.e., the SHA-384 length). Since the versions of BIND with this behavior are in widespread use today, use of the SHA-512 hash algorithm would likely lead to problems for many BIND installations, possibly including some Root Server Operators.

Presentation Format

Distribution of the zone between the Root Zone Maintainer and Root Server Operators primarily takes place via the DNS zone transfer protocol. In this protocol, zone data is transmitted in “wire format.”

The root zone is also stored and served as a file on the internic.net FTP and web servers. Here, the zone data is in “presentation format.” The ZONEMD record will appear in these files using its native presentation format. For example:

. 86400 IN ZONEMD 2021101902 1 1 ( 7d016e7badfd8b9edbfb515deebe7a866bf972104fa06fec
e85402cc4ce9b69bd0cbd652cec4956a0f206998bfb34483 )

Some users of zone data received from the FTP and web servers might currently be using software that does not recognize the ZONEMD presentation format. These users might experience some problems when the ZONEMD record first appears. We did consider using a generic record format; however, in consultation with ICANN, we believe that the native format is a better long-term solution.
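
For those updating their parsers, here is a small Python sketch that extracts the RDATA fields (SOA serial, scheme, hash algorithm, and digest) from the example record above. It assumes the single parenthesized ZONEMD record shown above, not a general zone-file grammar.

import re

line = (". 86400 IN ZONEMD 2021101902 1 1 ( "
        "7d016e7badfd8b9edbfb515deebe7a866bf972104fa06fec "
        "e85402cc4ce9b69bd0cbd652cec4956a0f206998bfb34483 )")

fields = re.sub(r"[()]", "", line).split()
owner, ttl, klass, rrtype, serial, scheme, hash_alg = fields[:7]
digest = "".join(fields[7:])    # rejoin the digest, which may be split across lines

print(serial)                   # 2021101902, matching the zone's SOA serial
print(scheme)                   # 1 = SIMPLE scheme
print(hash_alg)                 # 1 = SHA-384, 2 = SHA-512
print(len(digest) // 2)         # 48-byte digest, as expected for SHA-384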

Schedule

Currently, we are targeting the initial deployment of ZONEMD in the root zone for September 13, 2023. As previously stated, the ZONEMD record will be published first with a private-use hash algorithm number. We are targeting December 6, 2023, as the date to begin using the SHA-384 hash algorithm, at which point the root zone ZONEMD record will become verifiable.

Conclusion

Deploying ZONEMD in the root zone helps to increase the security, stability, and resiliency of the DNS. Soon, recursive name servers that choose to serve root zone data locally will have stronger assurances as to the zone’s validity.

If you’re interested in following the ZONEMD deployment progress, please look for our announcements on the DNS Operations mailing list.

The post Adding ZONEMD Protections to the Root Zone appeared first on Verisign Blog.

]]>
Minimized DNS Resolution: Into the Penumbra https://blog.verisign.com/security/minimized-dns-resolution/ Thu, 30 Mar 2023 19:37:27 +0000 https://blog.verisign.com/?p=6273 Over the past several years, domain name queries – a critical element of internet communication – have quietly become more secure, thanks, in large part, to a little-known set of technologies that are having a global impact. Verisign CTO Dr. Burt Kaliski covered these in a recent Internet Protocol Journal article, and I’m excited to […]

The post Minimized DNS Resolution: Into the Penumbra appeared first on Verisign Blog.

]]>

Over the past several years, domain name queries – a critical element of internet communication – have quietly become more secure, thanks, in large part, to a little-known set of technologies that are having a global impact. Verisign CTO Dr. Burt Kaliski covered these in a recent Internet Protocol Journal article, and I’m excited to share more about the role Verisign has performed in advancing this work and making one particular technology freely available worldwide.

The Domain Name System (DNS) has long followed a traditional approach of answering queries, where resolvers send a query with the same fully qualified domain name to each name server in a chain of referrals. Then, they generally apply the final answer they receive only to the domain name that was queried for in the original request.

But recently, DNS operators have begun to deploy various “minimization techniques” – techniques aimed at reducing both the quantity and sensitivity of information exchanged between DNS ecosystem components as a means of improving DNS security. Why the shift? As we discussed in a previous blog, it’s all in the interest of bringing the process closer to the “need-to-know” security principle, which emphasizes the importance of sharing only the minimum amount of information required to complete a task or carry out a function. This effort is part of a general, larger movement to reduce the disclosure of sensitive information in our digital world.

As part of Verisign’s commitment to security, stability, and resiliency of the global DNS, the company has worked both to develop qname minimization techniques and to encourage the adoption of DNS minimization techniques in general. We believe strongly in this work since these techniques can reduce the sensitivity of DNS data exchanged between resolvers and both root and TLD servers without adding operational risk to authoritative name server operations.

To help advance this area of technology, in 2015, Verisign announced a royalty-free license to its qname minimization patents in connection with certain Internet Engineering Task Force (IETF) standardization efforts. There’s been a steady increase in support and deployment since that time; as of this writing, roughly 67% of probes were utilizing qname-minimizing resolvers, according to statistics hosted by NLnet Labs. That’s up from just 0.7% in May 2017 – a strong indicator of minimization techniques’ usefulness to the community. At Verisign, we are seeing similar trends with approximately 65% of probes utilizing qname-minimizing resolvers in queries with two labels at .com and .net authoritative name servers, as shown in Figure 1 below.

Graph showing percentage of queries with two labels observed at COM/NET authoritative name servers
Figure 1: A domain name consists of one or more labels. For instance, www.example.com consists of three labels: “www”, “example”, and “com”. This chart suggests that more and more recursive resolvers have implemented qname minimization, which results in fewer queries for domain names with three or more labels. With qname minimization, the resolver would send “example.com,” with two labels, instead of “www.example.com” with all three.
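
The following short Python sketch shows the same idea: the sequence of query names a qname-minimizing resolver would send at each delegation level, compared with the traditional approach. It is illustrative only and does not model any particular resolver implementation.

def minimized_queries(fqdn):
    """Query names a qname-minimizing resolver sends at each delegation level."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

print(minimized_queries("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
# A traditional resolver would instead send the full name 'www.example.com.'
# to the root, .com, and example.com name servers alike.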

Kaliski’s article, titled “Minimized DNS Resolution: Into the Penumbra,” explores several specific minimization techniques documented by the IETF, reports on their implementation status, and discusses the effects of their adoption on DNS measurement research. An expanded version of the article can be found on the Verisign website.

This piece is just one of the latest to demonstrate Verisign’s continued investment in research and standards development in the DNS ecosystem. As a company, we’re committed to helping shape the DNS of today and tomorrow, and we recognize this is only possible through ongoing contributions by dedicated members of the internet infrastructure community – including the team here at Verisign.

Read more about Verisign’s contributions to this area:

Query Name Minimization and Authoritative DNS Server Behavior – DNS-OARC Spring ’15 Workshop (presentation)

Minimum Disclosure: What Information Does a Name Server Need to Do Its Job? (blog)

Maximizing Qname Minimization: A New Chapter in DNS Protocol Evolution (blog)

A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere (blog)

Information Protection for the Domain Name System: Encryption and Minimization (blog)

The post Minimized DNS Resolution: Into the Penumbra appeared first on Verisign Blog.

]]>
Verisign Domain Name Industry Brief: 350.4 Million Domain Name Registrations in the Fourth Quarter of 2022 https://blog.verisign.com/domain-names/verisign-q4-2022-the-domain-name-industry-brief/ Thu, 09 Mar 2023 20:57:37 +0000 https://blog.verisign.com/?p=6260 Today, we released the latest issue of The Domain Name Industry Brief, which shows that the fourth quarter of 2022 closed with 350.4 million domain name registrations across all top-level domains (TLDs), an increase of 0.5 million domain name registrations, or 0.1%, compared to the third quarter of 2022.1,2 Domain name registrations have increased by […]

The post Verisign Domain Name Industry Brief: 350.4 Million Domain Name Registrations in the Fourth Quarter of 2022 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the fourth quarter of 2022 closed with 350.4 million domain name registrations across all top-level domains (TLDs), an increase of 0.5 million domain name registrations, or 0.1%, compared to the third quarter of 2022.1,2 Domain name registrations have increased by 8.7 million, or 2.6%, year over year.1,2

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the fourth quarter of 2022, including:
  • Top 10 Largest TLDs by Number of Reported Domain Names
  • Top 10 Largest ccTLDs by Number of Reported Domain Names
  • ngTLDs as Percentage of Total TLDs
  • Geographical ngTLDs as Percentage of Total Corresponding Geographical TLDs

To see past issues of The Domain Name Industry Brief, please visit https://verisign.com/dnibarchives.

  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq, and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in Vol. 19, Issue 1 of The Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed, and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Domain Name Industry Brief: 350.4 Million Domain Name Registrations in the Fourth Quarter of 2022 appeared first on Verisign Blog.

]]>
Verisign’s Role in Securing the DNS Through Key Signing Ceremonies https://blog.verisign.com/security/verisign-dns-key-signing-ceremony/ Wed, 01 Mar 2023 15:13:17 +0000 https://blog.verisign.com/?p=6243 Every few months, an important ceremony takes place. It’s not splashed all over the news, and it’s not attended by global dignitaries. It goes unnoticed by many, but its effects are felt across the globe. This ceremony helps make the internet more secure for billions of people. This unique ceremony began in 2010 when Verisign, […]

The post Verisign’s Role in Securing the DNS Through Key Signing Ceremonies appeared first on Verisign Blog.

]]>

Every few months, an important ceremony takes place. It’s not splashed all over the news, and it’s not attended by global dignitaries. It goes unnoticed by many, but its effects are felt across the globe. This ceremony helps make the internet more secure for billions of people.

This unique ceremony began in 2010 when Verisign, the Internet Corporation for Assigned Names and Numbers (ICANN), and the U.S. Department of Commerce’s National Telecommunications and Information Administration collaborated – with input from the global internet community – to deploy a technology called Domain Name System Security Extensions (DNSSEC) to the Domain Name System (DNS) root zone in a special ceremony. This wasn’t a one-off occurrence in the history of the DNS, though. Instead, these organizations developed a set of processes, procedures, and schedules that would be repeated for years to come. Today, these recurring ceremonies help ensure that the root zone is properly signed, and as a result, the DNS remains secure, stable, and resilient.

In this blog, we take the opportunity to explain these ceremonies in greater detail and describe the critical role that Verisign is honored to perform.

A Primer on DNSSEC, Key Signing Keys, and Zone Signing Keys

DNSSEC is a series of technical specifications that allow operators to build greater security into the DNS. Because the DNS was not initially designed as a secure system, DNSSEC represented an essential leap forward in securing DNS communications. Deploying DNSSEC allows operators to better protect their users, and it helps to prevent common threats such as “man-in-the-middle” attacks. DNSSEC works by using public key cryptography, which allows zone operators to cryptographically sign their zones. This allows anyone communicating with and validating a signed zone to know that their exchanges are genuine.

The root zone, like most signed zones, uses separate keys for zone signing and for key signing. The Key Signing Key (KSK) is separate from the Zone Signing Key (ZSK). However, unlike most zones, the root zone’s KSK and ZSK are operated by different organizations; ICANN serves as the KSK operator and Verisign as the ZSK operator. These separate roles for DNSSEC align naturally with ICANN as the Root Zone Manager and Verisign as the Root Zone Maintainer.

In practice, the KSK/ZSK split means that the KSK only signs the DNSSEC keys, and the ZSK signs all the other records in the zone. Signing with the KSK happens infrequently – only when the keys change. However, signing with the ZSK happens much more frequently – whenever any of the zone’s other data changes.

DNSSEC and Public Key Cryptography

Something to keep in mind before we go further: remember that DNSSEC utilizes public key cryptography, in which keys have both a private and public component. The private component is used to generate signatures and must be guarded closely. The public component is used to verify signatures and can be shared openly. Good cryptographic hygiene says that these keys should be changed (or “rolled”) periodically.

In DNSSEC, changing a KSK is generally difficult, whereas changing a ZSK is relatively easy. This is especially true for the root zone where a KSK rollover requires all validating recursive name servers to update their copy of the trust anchor. Whereas the first and only KSK rollover to date happened after a period of eight years, ZSK rollovers take place every three months. Not coincidentally, this is also how often root zone key signing ceremonies take place.

Why We Have Ceremonies

The notion of holding a “ceremony” for such an esoteric technical function may seem strange, but this ceremony is very different from what most people are used to. Our common understanding of the word “ceremony” brings to mind an event with speeches and formal attire. But in this case, the meaning refers simply to the formality and ritual aspects of the event.

There are two main reasons for holding key signing ceremonies. One is to bring participants together so that everyone may transparently witness the process. Ceremony participants include ICANN staff, Verisign staff, Trusted Community Representatives (TCRs), and external auditors, plus guests on occasion.

The other important reason, of course, is to generate DNSSEC signatures. Occasionally other activities take place as well, such as generating new keys, retiring equipment, and changing TCRs. In this post, we’ll focus only on the signature generation procedures.

The Key Signing Request

A month or two before each ceremony, Verisign generates a file called the Key Signing Request (KSR). This is an XML document that includes the set of public key records (both KSK and ZSK) to be signed and then used during the next calendar quarter. The KSR is securely transmitted from Verisign to the Internet Assigned Numbers Authority (IANA), which is a function of ICANN that performs root zone management. IANA securely stores the KSR until it is needed for the upcoming key signing ceremony.

Each quarter is divided into nine 10-day “slots” (for some quarters, the last slot is extended by a day or two) and the XML file contains nine key “bundles” to be signed. Each bundle, or slot, has a signature inception and expiration timestamp, such that they overlap by at least five days. The first and last slots in each quarter are used to perform ZSK rollovers. During these slots we publish two ZSKs and one KSK in the root zone.
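
The sketch below reproduces that slot structure in Python. The 10-day step and the 21-day validity window are inferred from the ceremony output shown later in this post, not from a normative specification, and the sketch ignores the one- or two-day extension applied to the last slot in some quarters.

from datetime import date, timedelta

def ksr_slots(quarter_start, count=9, step_days=10, validity_days=21):
    """Yield (slot number, signature inception, signature expiration)."""
    for i in range(count):
        inception = quarter_start + timedelta(days=step_days * i)
        yield i + 1, inception, inception + timedelta(days=validity_days)

for slot, inception, expiration in ksr_slots(date(2022, 10, 1)):
    print(slot, inception, expiration)
# 1 2022-10-01 2022-10-22
# ...
# 9 2022-12-20 2023-01-10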

At the Ceremony: Details Matter

The root zone KSK private component is held inside secure Hardware Security Modules (HSMs). These HSMs are stored inside locked safes, which in turn are kept inside locked rooms. At a key signing ceremony, the HSMs are taken out of their safes and activated for use. This all occurs according to a pre-defined script with many detailed steps, as shown in the figure below.

Script for steps during key signing ceremony
Figure 1: A detailed script outlining the exact steps required to activate HSMs, as well as the initials and timestamps of witnesses.

Also stored inside the safe is a laptop computer, its operating system on non-writable media (i.e., DVD), and a set of credentials for the TCRs, stored on smart cards and locked inside individual safe deposit boxes. Once all the necessary items are removed from the safes, the equipment can be turned on and activated.

The laptop computer is booted from its operating system DVD and the HSM is connected via Ethernet for data transfer and serial port for console logging. The TCR credentials are used to activate the HSM. Once activated, a USB thumb drive containing the KSR file is connected to the laptop and the signing program is started.

The signing program reads the KSR, validates it, and then displays information about the keys about to be signed. This includes the signature inception and expiration timestamps, and the ZSK key tag values.

Validate and Process KSR /media/KSR/KSK46/ksr-root-2022-q4-0.xml...
#  Inception           Expiration           ZSK Tags      KSK Tag(CKA_LABEL)
1  2022-10-01T00:00:00 2022-10-22T00:00:00  18733,20826
2  2022-10-11T00:00:00 2022-11-01T00:00:00  18733
3  2022-10-21T00:00:00 2022-11-11T00:00:00  18733
4  2022-10-31T00:00:00 2022-11-21T00:00:00  18733
5  2022-11-10T00:00:00 2022-12-01T00:00:00  18733
6  2022-11-20T00:00:00 2022-12-11T00:00:00  18733
7  2022-11-30T00:00:00 2022-12-21T00:00:00  18733
8  2022-12-10T00:00:00 2022-12-31T00:00:00  18733
9  2022-12-20T00:00:00 2023-01-10T00:00:00  00951,18733
...PASSED.

It also displays an SHA256 hash of the KSR file and a corresponding “PGP (Pretty Good Privacy) Word List.” The PGP Word List is a convenient and efficient way of verbally expressing hexadecimal values:

SHA256 hash of KSR:
ADCE9749F3DE4057AB680F2719B24A32B077DACA0F213AD2FB8223D5E8E7CDEC
>> ringbolt sardonic preshrunk dinosaur upset telephone crackdown Eskimo rhythm gravity artist celebrate bedlamp pioneer dogsled component ruffled inception surmount revenue artist Camelot cleanup sensation watchword Istanbul blowtorch specialist trauma truncated spindle unicorn <<
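
Conceptually, the value being verified is just the SHA-256 hash of the KSR file, as in the Python sketch below. The file name is taken from the example output above, and rendering the digest with the PGP Word List (which maps each byte to a word, alternating between two lists by byte position) is omitted for brevity.

import hashlib

def ksr_sha256(path):
    """SHA-256 hash of the KSR file, read aloud and compared during the ceremony."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest().upper()

print(ksr_sha256("ksr-root-2022-q4-0.xml"))  # file name from the example above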

At this point, a Verisign representative comes forward to verify the KSR. The following actions then take place:

  1. The representative’s identity and proof-of-employment are verified.
  2. They verbalize the PGP Word List based on the KSR sent from Verisign.
  3. TCRs and other ceremony participants compare the spoken list of words to those displayed on the screen.
  4. When the checksum is confirmed to match, the ceremony administrator instructs the program to proceed with generating the signatures.

The signing program outputs a new XML document, called the Signed Key Response (SKR). This document contains signatures over the DNSKEY resource record sets in each of the nine slots. The SKR is saved to a USB thumb drive and given to a member of the Root Zone KSK Operations Security team. Usually sometime the next day, IANA securely transmits the SKR back to Verisign. Following several automatic and manual verification steps, the signature data is imported into Verisign’s root zone management system for use at the appropriate times in the next calendar quarter.

Why We Do It

Keeping the internet’s DNS secure, stable, and resilient is a crucial aspect of Verisign’s role as the Root Zone Maintainer. We are honored to participate in the key signing ceremonies with ICANN and the TCRs and do our part to help the DNS operate as it should.

For more information on root key signing ceremonies, visit the IANA website. Visitors can watch video recordings of previous ceremonies and even sign up to witness the next ceremony live. It’s a great resource, and a unique opportunity to take part in a process that helps keep the internet safe for all.

The post Verisign’s Role in Securing the DNS Through Key Signing Ceremonies appeared first on Verisign Blog.

]]>
Verisign Domain Name Industry Brief: 349.9 Million Domain Name Registrations in the Third Quarter of 2022 https://blog.verisign.com/domain-names/verisign-q3-2022-the-domain-name-industry-brief/ Thu, 08 Dec 2022 20:42:14 +0000 https://blog.verisign.com/?p=6229 Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2022 closed with 349.9 million domain name registrations across all top-level domains, a decrease of 1.6 million domain name registrations, or 0.4%, compared to the second quarter of 2022.1,2 Domain name registrations have increased by 11.5 […]

The post Verisign Domain Name Industry Brief: 349.9 Million Domain Name Registrations in the Third Quarter of 2022 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2022 closed with 349.9 million domain name registrations across all top-level domains, a decrease of 1.6 million domain name registrations, or 0.4%, compared to the second quarter of 2022.1,2 Domain name registrations have increased by 11.5 million, or 3.4%, year over year.1,2

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the third quarter of 2022, including:
  • Top 10 Largest TLDs by Number of Reported Domain Names
  • Top 10 Largest ccTLDs by Number of Reported Domain Names
  • ngTLDs as Percentage of Total TLDs
  • Geographical ngTLDs as Percentage of Total Corresponding Geographical TLDs

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.

  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in Vol. 19, Issue 1 of The Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Domain Name Industry Brief: 349.9 Million Domain Name Registrations in the Third Quarter of 2022 appeared first on Verisign Blog.

]]>
Celebrating 35 Years of the DNS Protocol https://blog.verisign.com/domain-names/35-years-dns-protocol/ Mon, 28 Nov 2022 17:09:52 +0000 https://blog.verisign.com/?p=6221 In 1987, CompuServe introduced GIF images, Steve Wozniak left Apple and IBM introduced the PS/2 personal computer with improved graphics and a 3.5-inch diskette drive. Behind the scenes, one more critical piece of internet infrastructure was quietly taking form to help establish the internet we know today. November of 1987 saw the establishment of the […]

The post Celebrating 35 Years of the DNS Protocol appeared first on Verisign Blog.

]]>

In 1987, CompuServe introduced GIF images, Steve Wozniak left Apple, and IBM introduced the PS/2 personal computer with improved graphics and a 3.5-inch diskette drive. Behind the scenes, one more critical piece of internet infrastructure was quietly taking form to help establish the internet we know today.

November of 1987 saw the establishment of the Domain Name System protocol suite as internet standards. This was a development that not only would begin to open the internet to individuals and businesses globally, but also would arguably redefine communications, commerce and access to information for future generations.

Today, the DNS continues to be critical to the operation of the internet as a whole. It has a long and strong track record thanks to the work of the internet’s pioneers and the collaboration of different groups to create voluntary standards.

Let’s take a look back at the journey of the DNS over the years.

Scaling the Internet for All

Prior to 1987, the internet was primarily used by government agencies and members of academia. Back then, the Network Information Center, managed by SRI International, manually maintained a directory of hosts and networks. While the early internet was transformative and forward-thinking, not everyone had access to it.

During that same time period, the U.S. Advanced Research Projects Agency Network, the forerunner to the internet we know now, was evolving into a growing network environment, and new naming and addressing schemes were being proposed. Seeing that there were thousands of interested institutions and companies wanting to explore the possibilities of networked computing, a group of ARPA networking researchers realized that a more modern, automated approach was needed to organize the network’s naming system for anticipated rapid growth.

Two Request for Comments documents, numbered RFC 1034 and RFC 1035, were published in 1987 by the informal Network Working Group, which soon after evolved into the Internet Engineering Task Force. Those RFCs, authored by computer scientist Paul V. Mockapetris, became the standards upon which DNS implementations have been built. It was Mockapetris, inducted into the Internet Hall of Fame in 2012, who specifically suggested a name space where database administration was distributed but could also evolve as needed.

In addition to allowing organizations to maintain their own databases, the DNS simplified the process of connecting a name that users could remember with a unique set of numbers – the Internet Protocol address – that web browsers needed to navigate to a website using a domain name. By not having to remember a seemingly random string of numbers, users could easily get to their intended destination, and more people could access the web. This has worked in a logical way for all internet users – from businesses large and small to everyday people – all around the globe.

With these two aspects of the DNS working together – wide distribution and name-to-address mapping – the DNS quickly took shape and developed into the system we know today.

The Multistakeholder Model and Rough Consensus

Thirty-five years of DNS development and progress is attributable to the collaboration of multiple stakeholders and interest groups – academia, technical community, governments, law enforcement and civil society, plus commercial and intellectual property interests – who continue even today to bring crucial perspectives to the table as it relates to the evolution of the DNS and the internet. These perspectives have lent themselves to critical security developments in the DNS, from assuring protection of intellectual property rights to the more recent stakeholder collaborative efforts to address DNS abuse.

Other major collaborative achievements involve the IETF, which has no formal membership roster or requirements, and is responsible for the technical standards that comprise the internet protocol suite, and the Internet Corporation for Assigned Names and Numbers, which plays a central coordination role in the bottom-up multistakeholder system governing the global DNS. Without constructive and productive voluntary collaboration, the internet as we know it simply isn’t possible.

Indeed, these cooperative efforts marshaled a brand of collaboration known today as “rough consensus.” That term, originally “rough consensus and running code,” gave rise to a more dynamic collaboration process than the “100% consensus from everyone” model. In fact, the term was adopted by the IETF in the early days of establishing the DNS to describe the formation of the dominant view of the working group and the need to quickly implement new technologies, which doesn’t always allow for lengthy discussions and debates. This approach is still in use today, proving its usefulness and longevity.

Recognizing a Milestone

As we look back on how the DNS came to be and the processes that have kept it reliably running, it’s important to recognize the work done by the organizations and individuals that make up this community. We must also remember that the efforts continue to be powered by voluntary collaborations.

Commemorating anniversaries such as 35 years of the DNS protocol allows the multiple stakeholders and communities to pause and reflect on the enormity of the work and responsibility before us. Thanks to the pioneering minds who conceived and built the early infrastructure of the internet, and in particular to Paul Mockapetris’s fundamental contribution of the DNS protocol suite, the world has been able to establish a robust global economy that few could ever have imagined so many years ago.

The 35th anniversary of the publication of RFCs 1034 and 1035 reminds us of the contributions that the DNS has made to the growth and scale of what we know today as “the internet.” That’s a moment worth celebrating.

The post Celebrating 35 Years of the DNS Protocol appeared first on Verisign Blog.

]]>
Verisign Q2 2022 Domain Name Industry Brief: 351.5 Million Domain Name Registrations in the Second Quarter of 2022 https://blog.verisign.com/domain-names/verisign-q2-2022-the-domain-name-industry-brief/ Tue, 20 Sep 2022 20:17:00 +0000 https://blog.verisign.com/?p=6183 Today, we released the latest issue of The Domain Name Industry Brief, which shows that the second quarter of 2022 closed with 351.5 million domain name registrations across all top-level domains, an increase of 1.0 million domain name registrations, or 0.3%, compared to the first quarter of 2022.1,2 Domain name registrations have increased by 10.4 […]

The post Verisign Q2 2022 Domain Name Industry Brief: 351.5 Million Domain Name Registrations in the Second Quarter of 2022 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the second quarter of 2022 closed with 351.5 million domain name registrations across all top-level domains, an increase of 1.0 million domain name registrations, or 0.3%, compared to the first quarter of 2022.1,2 Domain name registrations have increased by 10.4 million, or 3.0%, year over year.1,2

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the second quarter of 2022, including:
  • Top 10 Largest TLDs by Number of Reported Domain Names
  • Top 10 Largest ccTLDs by Number of Reported Domain Names
  • ngTLDs as Percentage of Total TLDs
  • Geographical ngTLDs as Percentage of Total Corresponding Geographical TLDs

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.

  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in Vol. 19, Issue 1 of The Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q2 2022 Domain Name Industry Brief: 351.5 Million Domain Name Registrations in the Second Quarter of 2022 appeared first on Verisign Blog.

]]>
ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition https://blog.verisign.com/security/retrospective-iana-transition/ Mon, 19 Sep 2022 15:57:37 +0000 https://blog.verisign.com/?p=6192 As we passed five years since the Internet Assigned Numbers Authority transition took place, my co-authors and I paused to look back on this pivotal moment; to take stock of what we’ve learned and to re-examine some of the key events leading up to the transition and how careful planning ensured a successful transfer of […]

The post ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition appeared first on Verisign Blog.

]]>

As we passed five years since the Internet Assigned Numbers Authority (IANA) transition took place, my co-authors and I paused to look back on this pivotal moment: to take stock of what we’ve learned and to re-examine some of the key events leading up to the transition, and how careful planning ensured a successful transfer of IANA responsibilities from the United States Government to the Internet Corporation for Assigned Names and Numbers (ICANN). I’ve excerpted the main themes from our work, which can be found in full on the Internet Governance Project blog.

In March 2014, the National Telecommunications and Information Administration, a division of the U.S. Department of Commerce, announced its intent to “transition key Internet domain name functions to the global multi-stakeholder community” and asked ICANN to “convene global stakeholders to develop a proposal to transition the current role played by NTIA in the coordination of the Internet’s domain name system.” This transition, as announced by NTIA, was a natural progression of ICANN’s multi-stakeholder evolution, and an outcome that was envisioned by its founders.

While there was general support for a transition to the global multi-stakeholder community, many in the ICANN community raised concerns about ICANN’s accountability, transparency and organizational readiness to “stand alone” without NTIA’s legacy supervision. In response, the ICANN community began a phase of intense engagement to ensure a successful transition with all necessary accountability and transparency structures and mechanisms in place.

As a result of this meticulous planning, we believe the IANA functions have been well-served by the transition and the new accountability structures designed and developed by the ICANN community to ensure the security, stability and resiliency of the internet’s unique identifiers.

But what does the future hold? While ICANN’s multi-stakeholder processes and accountability structures are functioning, even in the face of a global pandemic that interrupted our ability to gather and engage in person, they will require ongoing care to ensure they deliver on the original vision of private-sector-led management of the DNS.

The post ICANN’s Accountability and Transparency – a Retrospective on the IANA Transition appeared first on Verisign Blog.

]]>
Verisign Q1 2022 Domain Name Industry Brief: 350.5 Million Domain Name Registrations in the First Quarter of 2022 https://blog.verisign.com/domain-names/verisign-q1-2022-the-domain-name-industry-brief/ Thu, 30 Jun 2022 20:13:08 +0000 https://blog.verisign.com/?p=6168 Today, we released the latest issue of The Domain Name Industry Brief, which shows that the first quarter of 2022 closed with 350.5 million domain name registrations across all top-level domains, an increase of 8.8 million domain name registrations, or 2.6%, compared to the fourth quarter of 2021.1,2 Domain name registrations have increased by 13.2 […]

The post Verisign Q1 2022 Domain Name Industry Brief: 350.5 Million Domain Name Registrations in the First Quarter of 2022 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the first quarter of 2022 closed with 350.5 million domain name registrations across all top-level domains, an increase of 8.8 million domain name registrations, or 2.6%, compared to the fourth quarter of 2021.1,2 Domain name registrations have increased by 13.2 million, or 3.9%, year over year.1,2

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the first quarter of 2022, including:
  • Top 10 Largest TLDs by Number of Reported Domain Names
  • Top 10 Largest ccTLDs by Number of Reported Domain Names
  • ngTLDs as Percentage of Total TLDs
  • Geographical ngTLDs as Percentage of Total Corresponding Geographical TLDs

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.

  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in Vol 19, Issue 1 of The Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q1 2022 Domain Name Industry Brief: 350.5 Million Domain Name Registrations in the First Quarter of 2022 appeared first on Verisign Blog.

]]>
Celebrating Women Engineers Today and Every Day at Verisign https://blog.verisign.com/security/women-in-engineering-day/ Thu, 23 Jun 2022 13:22:47 +0000 https://blog.verisign.com/?p=6151 Celebrating Verisign's women engineers for INWED 2022Today, as the world celebrates International Women in Engineering Day, we recognize and honor women engineers at Verisign, whose own stories have helped shape dreams and encouraged young women and girls to take up engineering careers. Here are three of their stories: Shama Khakurel, Senior Software Engineer When Shama Khakurel was in high school, she […]

The post Celebrating Women Engineers Today and Every Day at Verisign appeared first on Verisign Blog.

]]>

Today, as the world celebrates International Women in Engineering Day, we recognize and honor women engineers at Verisign, whose own stories have helped shape dreams and encouraged young women and girls to take up engineering careers.

Here are three of their stories:

Shama Khakurel, Senior Software Engineer

When Shama Khakurel was in high school, she aspired to join the medical field. But she quickly realized that classes involving math or engineering came easiest to her, much more so than her work in biology or other subjects. It wasn’t until she took a summer computer programming course called “Lotus and dBase Programming” that she realized her career aspirations had officially changed; from that point on, she wanted to be an engineer.

In the nearly 20 years she’s been at Verisign, she’s expanded her skills, challenged herself, pursued opportunities – and always had the support of managers who mentored her along the way.

“Verisign has given me every opportunity to grow,” Shama says. And even though she continues to “learn something new every day,” she also provides mentorship to younger engineering employees.

Women tend to shy away from engineering roles, she says, because they think that math and science are harder subjects. “They seem to follow and believe that myth, but there is a lot of opportunity for a woman in this field.”

Vinaya Shenoy, Engineering Manager

For Vinaya Shenoy, an engineering manager who has worked for Verisign for 17 years, a passion for math and science at a young age steered her toward a career in computer science engineering.

She draws inspiration from other women who are industry leaders and immigrants from India, and who rose to the top ranks through their determination and leadership skills. She credits their stories with helping her see what all women are capable of, especially in unconventional or unexpected areas.

“Engineering is not just coding. There are a lot of areas within engineering that you can explore and pursue,” she says. “If problem-solving and creating are your passions, you can harness the power of technology to solve problems and give back to the community.”

Tuyet Vuong, Software Engineer

Tuyet Vuong is one of those people who enjoys problem-solving. As a young girl and the child of two physics teachers, she would often build small gadgets – perhaps her own clock or a small fan – from things she would find around the house.

Today, the challenges are bigger and have a greater impact, and she still finds herself enjoying them.

“Engineering is a fun, exciting and rewarding discipline where you can explore and build new things that are helpful to society,” says Tuyet. And sharing the insights and experiences of so many talented people – both men and women – is what makes the role that much more rewarding.

That sense of fulfillment also comes from breaking down stereotypes, such as the attitudes about women only being suitable for a limited number of careers when she was growing up in Vietnam. That’s why she’s a firm believer that mentoring and encouraging young women engineers isn’t just the responsibility of other women.

“The effort should come from both genders,” she says. “The effort shouldn’t come from women alone.”

Looking to the Future

At Verisign, we see the real impact of all our women engineers’ contributions when it comes to ensuring that the internet is secure, stable and resilient. Today and every day, we celebrate Verisign’s women engineers. We thank you for all you’ve done and everything you’re yet to accomplish.


If you’re interested in pursuing your passion for engineering, view our open career opportunities here.

The post Celebrating Women Engineers Today and Every Day at Verisign appeared first on Verisign Blog.

]]>
More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator https://blog.verisign.com/security/root-dns-query/ Thu, 02 Jun 2022 19:08:00 +0000 https://blog.verisign.com/?p=6122 Mysterious DNS Root Query Traffic from a Large Cloud/DNS OperatorThis blog was also published by APNIC. With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role. That’s why when […]

The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.

]]>

This blog was also published by APNIC.

With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.

That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.

That’s what occurred last year when Internet Corporation for Assigned Names and Numbers staff and contractors were analyzing names in Domain Name System queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.

While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.

Exploring the Mystery

One of the first things we noticed was that all of these mysterious queries were of type NS and came from one autonomous system network, AS 15169, assigned to Google LLC. Additionally, we confirmed that it was occurring consistently for numerous TLDs. (See Fig. 1)

Figure 1: Distribution of second-level label lengths in NS queries to root name servers, comparing AS 15169 to others, for several different TLDs.

Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)

Figure 2: Daily count of NS queries from AS 15169; the historical data shows the mysterious queries began in late 2019.

Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show that there appear to be queries for names with second-level label lengths of 10 and 11 characters, which are also absent from the TLD data.

Figure 3: Fraction of queries from AS 15169 appearing on A-root and J-root that also appear on .com and .net name servers, by the length of the second-level label.

The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.

However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.

Figure 4: Queries per day from AS 15169 to A-root and J-root for names with second-level label length equal to 12 or 13, over a 24-hour period (July 6, 2021).
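
To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python. The 4,000-server count and six-hour TTL cap are the assumptions stated above, and the 30-million-per-day figure is the observed rate; none of these values come from Google directly.

    # Rough estimate of expected vs. observed .com referral queries per day,
    # using the assumptions described above (illustrative only).
    backend_servers = 4_000                    # assumed number of resolver backend servers
    ttl_cap_hours = 6                          # assumed maximum TTL honored by the resolver
    refreshes_per_day = 24 / ttl_cap_hours     # each server refreshes the referral once per TTL window

    expected_per_day = backend_servers * refreshes_per_day   # ~16,000 queries per day
    observed_per_day = 30_000_000                             # observed query rate for .com names

    print(f"expected: {expected_per_day:,.0f} queries/day")
    print(f"observed: {observed_per_day:,} queries/day")
    print(f"ratio:    ~{observed_per_day / expected_per_day:,.0f}x")  # ~1,875x, i.e., on the order of 2,000x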

From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.

The Missing Piece

These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.

In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.

The Mystery Explained

Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.
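
As an illustration of how such query names could be constructed, the following Python sketch combines query name minimization with a nonce label derived from a 64-bit random value. This is our own reconstruction based on the observations above, not Google’s actual implementation; the function name and the use of a lowercase, unpadded Base32 alphabet are assumptions.

    # Hypothetical reconstruction of a nonce-prepended, minimized root query name.
    import base64
    import secrets

    def nonce_query_name(original_name: str) -> str:
        # Query name minimization: keep only the TLD label of the requested name.
        tld = original_name.rstrip(".").split(".")[-1]
        # Encode 64 random bits with Base32 and drop the '=' padding;
        # 64 bits at 5 bits per character yields a 13-character label.
        nonce = base64.b32encode(secrets.token_bytes(8)).decode("ascii").rstrip("=").lower()
        return f"{nonce}.{tld}"

    # A lookup of www.example.com could then produce a root query for a name
    # shaped like 'u5vmt7xanb6rf.com'.
    print(nonce_query_name("www.example.com"))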

With that said, there’s still one aspect that eludes explanation: the high query rate (roughly 2,000x for .com) and the apparent lack of caching. And so, this aspect of the mystery continues.

Wrapping Up

Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic may have remained unknown today. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.

The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.

]]>
Verisign Q4 2021 The Domain Name Industry Brief: 341.7 Million Domain Name Registrations in the Fourth Quarter of 2021 https://blog.verisign.com/domain-names/verisign-q4-2021-the-domain-name-industry-brief/ Fri, 15 Apr 2022 16:07:22 +0000 https://blog.verisign.com/?p=6110 Today, we released the latest issue of The Domain Name Industry Brief, which shows that the fourth quarter of 2021 closed with 341.7 million domain name registrations across all top-level domains, an increase of 3.3 million domain name registrations, or 1.0%, compared to the third quarter of 2021.1,2 Domain name registrations have increased by 1.6 […]

The post Verisign Q4 2021 The Domain Name Industry Brief: 341.7 Million Domain Name Registrations in the Fourth Quarter of 2021 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the fourth quarter of 2021 closed with 341.7 million domain name registrations across all top-level domains, an increase of 3.3 million domain name registrations, or 1.0%, compared to the third quarter of 2021.1,2 Domain name registrations have increased by 1.6 million, or 0.5%, year over year.1,2

Figure: Domain name registrations across all TLDs, Q4 2021.

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the fourth quarter of 2021, including:
  • Top 10 Largest TLDs by Number of Reported Domain Names
  • Top 10 Largest ccTLDs by Number of Reported Domain Names
  • ngTLDs as Percentage of Total TLDs
  • Geographical ngTLDs as Percentage of Total Corresponding Geographical TLDs

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.


  1. All figure(s) exclude domain names in the .tk, .cf, .ga, .gq and .ml ccTLDs. Quarterly and year-over-year trends have been calculated relative to historical figures that have also been adjusted to exclude these five ccTLDs. For further information, please see the Editor’s Note contained in the full Domain Name Industry Brief.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD internationalized domain names, (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q4 2021 The Domain Name Industry Brief: 341.7 Million Domain Name Registrations in the Fourth Quarter of 2021 appeared first on Verisign Blog.

]]>
Routing Without Rumor: Securing the Internet’s Routing System https://blog.verisign.com/security/routing-security/ Wed, 09 Mar 2022 19:29:28 +0000 https://blog.verisign.com/?p=6096 colorful laptopThis article is based on a paper originally published as part of the Global Commission on the Stability of Cyberspace’s Cyberstability Paper Series, “New Conditions and Constellations in Cyber,” on Dec. 9, 2021. The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the […]

The post Routing Without Rumor: Securing the Internet’s Routing System appeared first on Verisign Blog.

]]>

This article is based on a paper originally published as part of the Global Commission on the Stability of Cyberspace’s Cyberstability Paper Series, “New Conditions and Constellations in Cyber,” on Dec. 9, 2021.

The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.

Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.

To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.

Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.

The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destination: They route “by rumor.”

The Border Gateway Protocol

Under the Border Gateway Protocol, the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows, but each network independently applies its own policies on what actions it takes, if any, with traffic that traverses its network.

BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.
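
As a simplified illustration of how local policy, rather than any global authority, determines path selection, consider the sketch below. It applies only two of the many tie-breakers a real BGP implementation uses (operator-assigned local preference, then AS-path length), and the route data is invented for the example.

    # Highly simplified sketch of BGP-style best-path selection among candidate
    # routes to the same destination prefix (illustrative, not a real implementation).
    def best_path(routes):
        # Prefer the highest local preference (pure local policy),
        # then the shortest AS path.
        return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

    candidates = [
        {"via": "provider A", "local_pref": 100, "as_path": [64500, 64496]},
        {"via": "peer B",     "local_pref": 200, "as_path": [64501, 64502, 64496]},
    ]

    # Local policy wins: peer B is chosen even though its AS path is longer.
    print(best_path(candidates)["via"])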

Route Hijacks and Route Leaks

Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.

Preventing route hijacks and route leaks requires considerable coordination in the internet community, a concept that fundamentally goes against BGP’s design tenets of distributed action and autonomous operations. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world. That means that any network can potentially have a detrimental effect on the global reachability of any internet destination.

Resource Public Key Infrastructure

Fortunately, there is a solution already receiving considerable deployment momentum, the Resource Public Key Infrastructure. RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations that are cryptographically verifiable. ROAs can then be used by relying parties to confirm the routing information shared with them is from the authorized origin.
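
To show how a relying party might use ROAs, here is a minimal sketch of RFC 6811-style route origin validation. The ROA entries and the helper name are invented for illustration; real validators work from cryptographically validated RPKI data rather than a hard-coded list.

    # Minimal sketch of route origin validation against validated ROA payloads.
    import ipaddress

    # Each entry: (authorized prefix, maximum prefix length, authorized origin ASN).
    ROAS = [
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64496),
        (ipaddress.ip_network("2001:db8::/32"), 48, 64497),
    ]

    def origin_validation(announced_prefix: str, origin_asn: int) -> str:
        prefix = ipaddress.ip_network(announced_prefix)
        covered = False
        for roa_prefix, max_length, roa_asn in ROAS:
            if prefix.version == roa_prefix.version and prefix.subnet_of(roa_prefix):
                covered = True
                if prefix.prefixlen <= max_length and origin_asn == roa_asn:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(origin_validation("192.0.2.0/24", 64496))     # valid: authorized origin
    print(origin_validation("192.0.2.0/25", 64496))     # invalid: more specific than the ROA allows
    print(origin_validation("198.51.100.0/24", 64496))  # not-found: no covering ROA

In practice, relying-party software first fetches and cryptographically validates ROAs from the trust anchors before routers apply logic along these lines.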

RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.

Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.

Regional Internet Registries, the organizations responsible for top-level number resource allocation, can potentially have direct operational implications for the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.

As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility towards the security, stability and resiliency of the global routing system. MANRS is continuing to gain traction, guiding internet operators on what they can do to make the routing system more reliable.

Conclusion

Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.

The post Routing Without Rumor: Securing the Internet’s Routing System appeared first on Verisign Blog.

]]>
Observations on Resolver Behavior During DNS Outages https://blog.verisign.com/security/facebook-dns-outage/ Thu, 20 Jan 2022 19:14:28 +0000 https://blog.verisign.com/?p=6061 Globe with caution signsWhen an outage affects a component of the internet infrastructure, there can often be downstream ripple effects affecting other components or services, either directly or indirectly. We would like to share our observations of this impact in the case of two recent such outages, measured at various levels of the DNS hierarchy, and discuss the […]

The post Observations on Resolver Behavior During DNS Outages appeared first on Verisign Blog.

]]>

When an outage affects a component of the internet infrastructure, there can often be downstream ripple effects affecting other components or services, either directly or indirectly. We would like to share our observations of this impact in the case of two recent such outages, measured at various levels of the DNS hierarchy, and discuss the resultant increase in query volume due to the behavior of recursive resolvers.

During the beginning of October 2021, the internet saw two significant outages, affecting Facebook’s services and the .club top level domain, both of which did not properly resolve for a period of time. Throughout these outages, Verisign and other DNS operators reported significant increases in query volume. We provided consistent responses throughout, with the correct delegation data pointing to the correct nameservers.

While these higher query rates do not impact Verisign’s ability to respond, they raise a broader operational question – whether the repeated nature of these queries, indicative of a lack of negative caching, might potentially be mistaken for a denial-of-service attack.

Facebook

On Oct. 4, 2021, Facebook experienced a widespread outage, lasting nearly six hours. During this time most of its systems were unreachable, including those that provide Facebook’s DNS service. The outage impacted facebook.com, instagram.com, whatsapp.net and other domain names.

Under normal conditions, the .com and .net authoritative name servers answer about 7,000 queries per second in total for the three domain names previously mentioned. During this particular outage, however, query rates for these domain names reached upwards of 900,000 queries per second (an increase of more than 100x), as shown in Figure 1 below.

Figure 1: Rate of DNS queries for Facebook’s domain names during the 10/4/21 outage.

During this outage, recursive name servers received no response from Facebook’s name servers – instead, those queries timed out. In situations such as this, recursive name servers generally return a SERVFAIL or “server failure” response, presented to end users as a “this site can’t be reached” error.

Figure 1 shows an increasing query rate over the duration of the outage. Facebook uses relatively low time-to-live (TTL) values of one to five minutes on its DNS records; the TTL is the setting that tells DNS resolvers how long to cache an answer before issuing a new query. This in turn means that, five minutes into the outage, all relevant records would have expired from all recursive resolver caches – or at least from those that honor the publisher’s TTLs. It is not immediately clear why the query rate continues to climb throughout the outage, nor whether it would eventually have plateaued had the outage continued.

To get a sense of where the traffic comes from, we group query sources by their autonomous system number. The top five autonomous systems, along with all others grouped together, are shown in Figure 2.

Figure 2: Rate of DNS queries for Facebook’s domain names grouped by source autonomous system.

From Figure 2 we can see that, at their peak, queries for these domain names to Verisign’s .com and .net authoritative name servers from the most active recursive resolvers – those of Google and Cloudflare – increased around 7,000x and 2,000x respectively over their average non-outage rates.

.club

On Oct. 7th, 2021, three days after Facebook’s outage, the .club and .hsbc TLDs also experienced a three-hour outage. In this case, the relevant authoritative servers remained reachable, but responded with SERVFAIL messages. The effect on recursive resolvers was essentially the same: Since they did not receive useful data, they repeatedly retried their queries to the parent zone. During the incident, the Verisign-operated A-root and J-root servers observed an increase in queries for .club domain names of 45x, from 80 queries per second before, to 3,700 queries per second during the outage.

Figure 3: Rate of DNS queries to A and J root servers during the 10/7/2021 .club outage.

As in the previous example, this outage demonstrated an increasing trend in query rate over its duration. In this case, it might be explained by the fact that the records for .club’s delegation in the root zone use two-day TTLs. However, the theoretical analysis is complicated by the fact that authoritative name server records in child zones use longer TTLs (six days), while authoritative name server address records use shorter TTLs (10 minutes). Here we do not observe a significant amount of query traffic from Google sources; instead, the increased query volume is largely attributable to the long tail of recursive resolvers in “All Others.”

Figure 4: Rate of DNS queries for .club and .hsbc, grouped by source autonomous system.

Botnet Traffic

Earlier this year Verisign implemented a botnet sinkhole and analyzed the received traffic. This botnet utilizes more than 1,500 second-level domain names, likely for command and control. We observed queries from approximately 50,000 clients every day. As an experiment, we configured our sinkhole name servers to return SERVFAIL and REFUSED responses for two of the botnet domain names.

When configured to return a valid answer, each domain name’s query rate peaks at about 50 queries per second. However, when configured to return SERVFAIL, the query rate for a single domain name increases to 60,000 per second, as shown in Figure 5. Further, the query rate for the botnet domain name also increases at the TLD and root name servers, even though those services are functioning normally and data relevant to the botnet domain name has not changed – just as with the two outages described above. Figure 6 shows data from the same experiment (although for a different date), colored by the source autonomous system. Here, we can see that approximately half of the increased query traffic is generated by one organization’s recursive resolvers.

Figure 5: Query rates to name servers for one domain name experimentally configured to return SERVFAIL.

Figure 6: Query rates to botnet sinkhole name servers, grouped by autonomous system, when one domain name is experimentally configured to return SERVFAIL.

Common Threads

These two outages and one experiment all demonstrate that recursive name servers can become unnecessarily aggressive when responses to queries are not received due to connectivity issues, timeouts, or misconfigurations.

In each of these three cases, we observe significant query rate increases from recursive resolvers across the internet – with particular contributors, such as Google Public DNS and Cloudflare’s resolver, identified on each occasion.

Often in cases like this we turn to internet standards for guidance. RFC 2308 is a 1998 Standards Track specification that describes negative caching of DNS queries. The RFC covers name errors (e.g., NXDOMAIN), no data, server failures, and timeouts. Unfortunately, it states that negative caching for server failures and timeouts is optional. We have submitted an Internet-Draft that proposes updating RFC 2308 to require negative caching for DNS resolution failures.

We believe it is important for the security, stability and resiliency of the internet’s DNS infrastructure that the implementers of recursive resolvers and public DNS services carefully consider how their systems behave in circumstances where none of a domain name’s authoritative name servers are providing responses, yet the parent zones are providing proper referrals. We feel it is difficult to rationalize the patterns that we are currently observing, such as hundreds of queries per second from individual recursive resolver sources. The global DNS would be better served by more appropriate rate limiting, and algorithms such as exponential backoff, to address these types of cases we’ve highlighted here. Verisign remains committed to leading and contributing to the continued security, stability and resiliency of the DNS for all global internet users.
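
The kind of resolver behavior we are advocating can be sketched in a few lines. The class below is purely illustrative (the names, base interval and cap are our own assumptions, not taken from RFC 2308 or our Internet-Draft); it shows how a resolver could suppress retries to a failing zone with exponential backoff and clear that state once resolution succeeds again.

    # Illustrative sketch of exponential backoff for repeated resolution failures.
    import time

    class FailureBackoff:
        def __init__(self, base_seconds: float = 5.0, cap_seconds: float = 300.0):
            self.base = base_seconds
            self.cap = cap_seconds
            self.state = {}  # zone -> (consecutive failure count, earliest next retry time)

        def allow_query(self, zone: str) -> bool:
            _, next_retry = self.state.get(zone, (0, 0.0))
            return time.monotonic() >= next_retry

        def record_failure(self, zone: str) -> None:
            count, _ = self.state.get(zone, (0, 0.0))
            delay = min(self.base * (2 ** count), self.cap)  # 5s, 10s, 20s, ... capped at 300s
            self.state[zone] = (count + 1, time.monotonic() + delay)

        def record_success(self, zone: str) -> None:
            self.state.pop(zone, None)  # resume normal behavior once answers return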

This piece was co-authored by Verisign Fellow Duane Wessels and Verisign Distinguished Engineers Matt Thomas and Yannis Labrou.

The post Observations on Resolver Behavior During DNS Outages appeared first on Verisign Blog.

]]>
IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes https://blog.verisign.com/domain-names/irp-panel-sanctions-afilias-clears-the-way-for-icann-to-decide-web-disputes/ Tue, 28 Dec 2021 18:48:44 +0000 https://blog.verisign.com/?p=6050 Verisign LogoThe .web Independent Review Process (IRP) Panel issued a Final Decision six months ago, in May 2021. Immediately thereafter, the claimant, Afilias Domains No. 3 Limited (now a shell entity known as AltaNovo Domains Limited), filed an application seeking reconsideration of the Final Decision under Rule 33 of the arbitration rules. Rule 33 allows for […]

The post IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes appeared first on Verisign Blog.

]]>

The .web Independent Review Process (IRP) Panel issued a Final Decision six months ago, in May 2021. Immediately thereafter, the claimant, Afilias Domains No. 3 Limited (now a shell entity known as AltaNovo Domains Limited), filed an application seeking reconsideration of the Final Decision under Rule 33 of the arbitration rules. Rule 33 allows for the clarification of an ambiguous ruling and allows the Panel the opportunity to supplement its decision if it inadvertently failed to consider a claim or defense, but specifically does not permit wholesale reconsideration of a final decision. The problem for Afilias’ application, as we said at the time, was that it sought exactly that.

The Panel ruled on Afilias’ application on Dec. 21, 2021. In this latest ruling, the Panel not only rejected Afilias’ application in its entirety, but went further and sanctioned Afilias for having filed it in the first place. Quoting from the ruling:

In the opinion of the Panel, under the guise of seeking an additional decision, the Application is seeking reconsideration of core elements of the Final Decision. Likewise, under the guise of seeking interpretation, the Application is requesting additional declarations and advisory opinions on a number of questions, some of which had not been discussed in the proceedings leading to the Final Decision.

In such circumstances, the Panel cannot escape the conclusion that the Application is “frivolous” in the sense of it “having no sound basis (as in fact or law).” This finding suffices to entitle [ICANN] to the cost shifting decision it is seeking…the Panel hereby unanimously…Grants [ICANN’s] request that the Panel shift liability for the legal fees incurred by [ICANN] in connection with the Application, fixes at US $236,884.39 the amount of the legal fees to be reimbursed to [ICANN] by [Afilias]…and orders [Afilias] to pay this amount to [ICANN] within thirty (30) days….

In light of the Panel’s finding that Afilias’ Rule 33 application was so improper and frivolous as to be sanctionable, a serious question arises about the motives in filing it. Reading the history of the .web proceedings, one possible motivation becomes clearer. The community will recall that, five years ago, Donuts (through its wholly-owned subsidiary Ruby Glen) failed in its bid to enjoin the .web auction when a federal court rejected false allegations that Nu Dot Co (NDC) had failed to disclose an ownership change. After the auction was conducted, Afilias then picked up the litigation baton from Donuts. Afilias’ IRP complaint demanded that the arbitration Panel nullify the auction results, and award .web to itself, thereby bypassing ICANN completely. In the May 2021 Final Decision the IRP Panel gave an unsurprising but firm “no” to Afilias’ request to supplant ICANN’s role, and instead directed ICANN’s Board to review the complaints about the conduct of the .web contention set members and then make a determination on delegation.

A result of this five-year battle has been to prevent ICANN from passing judgment on the .web situation. These proceedings have unsuccessfully sought to have courts and arbitrators stand in the shoes of ICANN, rather than letting ICANN discharge its mandated duty to determine what, if anything, should be done in response to the allegations regarding the pre-auction conduct of the contention set. This conduct includes Afilias’ own wrongdoing in violating the pre-auction communications blackout imposed in the Auction Rules. That misconduct is set forth in a July 23, 2021 letter by NDC to ICANN, since published by ICANN, containing written proof of Afilias’ violation of the auction rules. In its Dec. 21 ruling, the Panel made it unmistakably clear that it is ICANN – not a judge or a panel of arbitrators – who must first review all allegations of misconduct by the contention set, including the powerful evidence indicating that it is Afilias’ .web application, not NDC’s, that should be disqualified.

If Afilias’ motivation has been to avoid ICANN’s scrutiny of its own pre-auction misconduct, especially after exiting the registry business when it appears that its only significant asset is the .web application itself, then what we should expect to see next is for Afilias/AltaNovo to manufacture another delaying attack on the Final Decision. Perhaps this is why its litigation counsel has already written ICANN threatening to continue litigation “in all available fora whether within or outside of the United States of America.…”

It is long past time to put an end to this five-year campaign, which has interfered with ICANN’s duty to decide on the delegation of .web, harming the interests of the broader internet community. The new ruling obliges ICANN to take a decisive step in that direction.

The post IRP Panel Sanctions Afilias, Clears the Way for ICANN to Decide .web Disputes appeared first on Verisign Blog.

]]>
Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021 https://blog.verisign.com/domain-names/verisign-q3-2021-domain-name-industry-brief/ Thu, 09 Dec 2021 22:00:00 +0000 https://blog.verisign.com/?p=6019 The Domain Name Industry Brief December 2021Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2021 closed with 364.6 million domain name registrations across all top-level domains, a decrease of 2.7 million domain name registrations, or 0.7%, compared to the second quarter of 2021.1,2 Domain name registrations have decreased by 6.1 […]

The post Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the third quarter of 2021 closed with 364.6 million domain name registrations across all top-level domains, a decrease of 2.7 million domain name registrations, or 0.7%, compared to the second quarter of 2021.1,2 Domain name registrations have decreased by 6.1 million, or 1.6%, year over year.1,2

Figure: Total domain name registrations across all TLDs in Q3 2021.

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the third quarter of 2021.

The Domain Name Industry Brief this quarter also includes an overview of the ongoing community work to mitigate DNS security threats.

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.


  1. The figure(s) includes domain names in the .tk ccTLD. .tk is a ccTLD that provides free domain names to individuals and businesses. Revenue is generated by monetizing expired domain names. Domain names no longer in use by the registrant or expired are taken back by the registry and the residual traffic is sold to advertising networks. As such, there are no deleted .tk domain names. The .tk zone reflected here is based on data from Q4 2020, which is the most recent data available. https://www.businesswire.com/news/home/20131216006048/en/Freenom-Closes-3M-Series-Funding#.UxeUGNJDv9s.
  2. The generic TLD, ngTLD and ccTLD data cited in the brief: (i) includes ccTLD Internationalized Domain Names (IDNs), (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q3 2021 The Domain Name Industry Brief: 364.6 Million Domain Name Registrations in the Third Quarter of 2021 appeared first on Verisign Blog.

]]>
Ongoing Community Work to Mitigate Domain Name System Security Threats https://blog.verisign.com/domain-names/ongoing-community-work-to-mitigate-domain-name-system-security-threats/ Tue, 07 Dec 2021 01:17:14 +0000 https://blog.verisign.com/?p=5985 For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the […]

The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.

]]>

For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.

As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.

As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, and also the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important, but limited, when considering the full range of possible definitions of “DNS Abuse,” and developing a comprehensive understanding of security threat categories and the roles and responsibilities of various players in the internet infrastructure ecosystem is required.

In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:

  • Determining the roles and responsibilities of the various service providers across the internet ecosystem;
  • Delineating categories of threats: content, infrastructure, illegal vs. harmful, etc.;
  • Understanding the precise operational and technical capabilities of various types of providers across the internet ecosystem;
  • Relationships, if any, that respective service providers have with individuals or entities responsible for creating and/or removing the illegal or abusive activity;
  • Role of third-party “trusted notifiers,” including government actors, that may play a role in identifying and reporting illegal and abusive behavior to the appropriate service provider;
  • Processes to ensure infrastructure providers can trust third-party notifiers to reliably identify and provide evidence of illegal or harmful content;
  • Promoting administrative and operational scalability in trusted notifier engagements;
  • Determining the necessary safeguards around liability, due process, and transparency to ensure domain name registrants have recourse when the DNS is used as a tool to police DNS security threats, particularly when related to content; and
  • Supporting ICANN’s important and appropriate role in coordination and facilitation, particularly as a centralized source of data, tools, and resources to help and hold accountable those parties responsible for managing and maintaining the internet’s unique identifiers.
Figure 1: The Internet Ecosystem

Definitions of Online Abuse

To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.

  • DNS Security Threats: defined as being “composed of five broad categories of harmful activity [where] they intersect with the DNS: malware, botnets, phishing, pharming, and spam when [spam] serves as a delivery mechanism for those other forms of DNS Abuse.”
  • Infrastructure Abuse: a broader set of security threats that can impact the DNS itself – including denial-of-service / distributed denial-of-service (DoS / DDoS) attacks, DNS cache poisoning, protocol-level attacks, and exploitation of implementation vulnerabilities.
  • Illegal Content: content that is unlawful and hosted on websites that are accessed via domain names in the global DNS. Examples might include the illegal sale of controlled substances, the distribution of child sexual abuse material (CSAM), and proven intellectual property infringement.
  • Abusive Content: content hosted on websites using the domain name infrastructure that is deemed “harmful,” either under applicable law or norms, which could include scams, fraud, misinformation, or intellectual property infringement, where illegality has yet to be established by a court of competent jurisdiction.

Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.

ICANN Organization’s Efforts on DNS Abuse

The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with an overarching purpose to develop a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and they are updated monthly. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.

The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.

The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.

The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.

ICANN Community Inputs on DNS Abuse

As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee, via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of renegotiation of the Registrar Accreditation Agreement between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.

Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.

During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.

In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.

Other Groups and Actors Focused on DNS Security

It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:

Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.

DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.

Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.

ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.

Other Community Groups
Verisign contributes to the anti-abuse, technical and policy communities: We continuously engage with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in anti-abuse, technical and policy communities such as the Anti-Phishing and Messaging, Malware and Mobile Anti-Abuse Working Groups, FIRST and the Internet Engineering Task Force.

What Verisign is Doing Today

As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:

  • Monitoring for abuse: Protecting against abuse starts with knowing what is happening in our systems and services, in a timely manner, and being capable of detecting anomalous or abusive behavior, and then reacting to address it appropriately. Verisign works closely with a range of actors, including trusted notifiers, to help ensure our abuse mitigation actions are informed by sources with necessary subject matter expertise and procedural rigor.
  • Blocking and redirecting abusive domain names: Blocking certain domain names that have been identified by Verisign and/or trusted third parties as security threats, including botnets that leverage well-understood and characterized domain generation algorithms, helps us protect our infrastructure and neutralize or otherwise minimize potential security and stability threats more broadly by remediating abuse enabled via domain names in our TLDs. For example, earlier this year, Verisign observed a botnet family that was responsible for such a disproportionate amount of total global DNS queries that we were compelled to act to remediate it. This was referenced in Verisign’s Q1 2021 Domain Name Industry Brief (Volume 18, Issue 2).
  • Avoiding disposable domain name registrations: While heavily discounted domain name pricing strategies may promote short-term sales, they may also attract a spectrum of registrants, some of whom may be engaged in abuse. Some security threats, including phishing and botnets, exploit the ability to register large numbers of ‘disposable’ domain names rapidly and cheaply. Accordingly, Verisign avoids marketing programs that would allow our TLDs to be characterized as part of this class of ‘disposable’ domains, which has been shown to attract miscreants and enable abusive behavior.
  • Maintaining a cooperative and responsive partnership with law enforcement and government agencies, and engagement with courts of relevant jurisdiction: To ensure the security, stability and resiliency of the DNS and the internet at large, we have developed and maintained constructive relationships with United States and international law enforcement and government agencies to assist in addressing imminent and ongoing substantial security threats to operational applications and critical internet infrastructure, as well as illegal activity associated with domain names.
  • Ensuring adherence of contractual obligations: Our contractual frameworks, including our registry policies and .com Registry-Registrar Agreements, help provide an effective legal framework that discourages abusive domain name registrations. We believe that fair and consistent enforcement of our policies helps to promote good hygiene within the registrar channel.
  • Entering into a binding Letter of Intent with ICANN that commits both parties to cooperate in taking a leadership role in combating security threats. This includes working with the ICANN community to determine the appropriate process for developing and implementing best practices related to combating security threats; educating the wider ICANN community about security threats; and supporting activities that preserve and enhance the security, stability and resiliency of the DNS. Verisign also made a substantial financial commitment in direct support of these important efforts.

Trusted Notifiers

An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers, or building trusted notifier entities, to help mitigate illegal and abusive online activity.

Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating CSAM. These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.

Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.

Conclusion

Over the last several years, the mitigation of DNS abuse and DNS-related security threats has been an important topic of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties and other groups within the ICANN community, the DNS ecosystem, including Verisign, has been constructively engaged in developing a common understanding of these threats and in practical work to meaningfully reduce the level and impact of malicious activity in the DNS. In addition to its contractual compliance functions, ICANN has made important contributions to advancing this work, and it continues to play a critical coordination and facilitation role in bringing the ICANN community together on this topic. The community’s recent focus on DNS abuse has been productive, significant progress has been made, and more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.

The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.

]]>
Industry Insights: RDAP Becomes Internet Standard https://blog.verisign.com/domain-names/industry-insights-rdap-becomes-internet-standard/ Thu, 16 Sep 2021 15:20:10 +0000 https://blog.verisign.com/?p=5967 Technical header image of codeThis article originally appeared in The Domain Name Industry Brief (Volume 18, Issue 3) Earlier this year, the Internet Engineering Task Force’s (IETF’s) Internet Engineering Steering Group (IESG) announced that several Proposed Standards related to the Registration Data Access Protocol (RDAP), including three that I co-authored, were being promoted to the prestigious designation of Internet […]

The post Industry Insights: RDAP Becomes Internet Standard appeared first on Verisign Blog.

]]>

This article originally appeared in The Domain Name Industry Brief (Volume 18, Issue 3)

Earlier this year, the Internet Engineering Task Force’s (IETF’s) Internet Engineering Steering Group (IESG) announced that several Proposed Standards related to the Registration Data Access Protocol (RDAP), including three that I co-authored, were being promoted to the prestigious designation of Internet Standard. Initially accepted as Proposed Standards six years ago, RFC 7480, RFC 7481, RFC 9082 and RFC 9083 now comprise the new Standard 95. RDAP allows users to access domain registration data and could one day replace its predecessor, the WHOIS protocol. RDAP is designed to address some widely recognized deficiencies in the WHOIS protocol and can help improve the registration data chain of custody.

In the discussion that follows, I’ll look back at the evolution of the registry data model from WHOIS to RDAP, and examine how RDAP can improve upon the more traditional, WHOIS-based registry models.

Registration Data Directory Services Evolution, Part 1: The WHOIS Protocol

In 1998, Network Solutions was responsible for providing both consumer-facing registrar and back-end registry functions for the legacy .com, .net and .org generic top-level domains (gTLDs). Network Solutions collected information from domain name registrants, used that information to process domain name registration requests, and published both collected data and data derived from processing registration requests (such as expiration dates and status values) in a public-facing directory service known as WHOIS.
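
To make the legacy protocol concrete, the short sketch below shows what a WHOIS lookup looks like at the wire level: a single query string sent over TCP port 43, answered with free-form text. It is a minimal illustration only; the default server name is an assumption (it is commonly used for .com and .net lookups), and real clients handle referrals, rate limits and character encoding more carefully.

```python
# Minimal WHOIS lookup sketch: one query over TCP port 43, free-form text back.
# The default server name is an assumption, commonly used for .com/.net lookups.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a WHOIS query and return the raw, unstructured text response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Output format varies from server to server, one of the protocol's limitations.
    print(whois_query("example.com"))
```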

From Network Solution’s perspective as the registry, the chain of custody for domain name registration data involved only two parties: the registrant (or their agent) and Network Solutions. With the introduction of a Shared Registration System (SRS) in 1999, multiple registrars began to compete for domain name registration business by using the registry services operated by Network Solutions. The introduction of additional registrars and the separation of registry and registrar functions added parties to the chain of custody of domain name registration data. Information flowed from the registrant, to the registrar, and then to the registry, typically crossing multiple networks and jurisdictions, as depicted in Figure 1.

Figure 1. Flow of information in early data registration process.

Registration Data Directory Services Evolution, Part 2: The RDAP Protocol

Over time, new gTLDs and new registries came into existence, new WHOIS services (with different output formats) were launched, and countries adopted new laws and regulations focused on protecting the personal information associated with domain name registration data. As time progressed, it became clear that WHOIS lacked several needed features, such as:

  • Standardized command structures
  • Output and error structures
  • Support for internationalization and localization
  • User identification
  • Authentication and access control

The IETF made multiple attempts to add features to WHOIS to address some of these issues, but none of them were widely adopted. A possible replacement protocol known as the Internet Registry Information Service (IRIS) was standardized in 2005, but it, too, failed to gain traction. Something else was needed, and the IETF went back to work to produce what became known as RDAP.

RDAP was specified in a series of five IETF Proposed Standard RFC documents, including the following, all of which were published in March 2015:

  • RFC 7480, HTTP Usage in the Registration Data Access Protocol (RDAP)
  • RFC 7481, Security Services for the Registration Data Access Protocol (RDAP)
  • RFC 7482, Registration Data Access Protocol (RDAP) Query Format
  • RFC 7483, JSON Responses for the Registration Data Access Protocol (RDAP)
  • RFC 7484, Finding the Authoritative Registration Data (RDAP) Service

Only when RDAP was standardized did we start to see broad deployment of a possible WHOIS successor by domain name registries, domain name registrars and address registries.

The broad deployment of RDAP led to RFCs 7480 and 7481 becoming Internet Standard RFCs (part of Internet Standard 95) without modification in March 2021. As operators of registration data directory services implemented and deployed RDAP, they found places in the other specifications where minor corrections and clarifications were needed without changing the protocol itself. RFC 7482 was updated to become Internet Standard RFC 9082, which was published in June 2021. RFC 7483 was updated to become Internet Standard RFC 9083, which was also published in June 2021. All were added to Standard 95. As of the writing of this article, RFC 7484 is in the process of being reviewed and updated for elevation to Internet Standard status.

RDAP Advantages

Operators of registration data directory services who implemented RDAP can take advantage of key features not available in the WHOIS protocol. I’ve highlighted some of these important features below, along with the benefit each provides.

  • Standard, well-understood, and widely available HTTP transport: Relatively easy to implement, deploy and operate using common web service tools, infrastructure and applications.
  • Securable via HTTPS: Helps provide confidentiality for RDAP queries and responses, reducing the amount of information that is disclosed to monitors.
  • Structured output in JavaScript Object Notation (JSON): JSON is well-understood and tool friendly, which makes it easier for clients to parse and format responses from all servers without the need for software that’s customized for different service providers.
  • Easily extensible: Designed to support the addition of new features without breaking existing implementations. This makes it easier to address future functional needs with less risk of implementation incompatibility.
  • Internationalized output, with full support for Unicode character sets: Allows implementations to provide human-readable inputs and outputs that are represented in a language appropriate to the local operating environment.
  • Referral capability, leveraging HTTP constructs: Provides information to software clients that allows the client to retrieve additional information from other RDAP servers. This can be used to hide complexity from human users.
  • Support of standardized authentication: RDAP can take full advantage of all of the client identification, authentication and authorization methods that are available to web services. This means that RDAP can be used to provide the basic framework for differentiated access to registration data based on attributes associated with the user and the user’s query.
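
To show how several of these features fit together in practice, here is a minimal lookup sketch that uses only the Python standard library. It follows the bootstrapping approach of RFC 7484 (the IANA registry URL and field names come from that specification) and issues a standard /domain/&lt;name&gt; query; error handling, HTTP redirects and authentication are omitted, so treat it as an illustration rather than a production client.

```python
# Minimal RDAP lookup sketch using the IANA bootstrap registry (RFC 7484)
# and a standard /domain/<name> query (RFC 9082), parsed as JSON (RFC 9083).
import json
import urllib.request

IANA_BOOTSTRAP = "https://data.iana.org/rdap/dns.json"

def rdap_base_url(tld: str) -> str:
    """Find an RDAP service base URL for a TLD from the IANA bootstrap file."""
    with urllib.request.urlopen(IANA_BOOTSTRAP) as resp:
        bootstrap = json.load(resp)
    for tlds, urls in bootstrap["services"]:
        if tld in tlds:
            return urls[0]
    raise LookupError(f"no RDAP service registered for .{tld}")

def lookup_domain(name: str) -> dict:
    """Query the authoritative RDAP server for a domain and return parsed JSON."""
    base = rdap_base_url(name.rsplit(".", 1)[-1]).rstrip("/")
    with urllib.request.urlopen(f"{base}/domain/{name}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    record = lookup_domain("example.com")
    # "handle" and "status" are standard members of an RDAP domain object.
    print(record.get("handle"), record.get("status"))
```

Because every compliant server returns the same JSON structure, the same small client works across registries, which is exactly the interoperability benefit the structured-output and referral features are meant to deliver.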

Verisign and RDAP

Verisign’s RDAP service, which was originally launched as an experimental implementation several years before gaining widespread adoption, allows users to look up records in the registry database for all registered .com, .net, .name, .cc and .tv domain names. It also supports Internationalized Domain Names (IDNs).

We at Verisign were pleased not only to see the IETF recognize the importance of RDAP by elevating it to an Internet Standard, but also that the protocol became a requirement for ICANN-accredited registrars and registries as of August 2019. Widespread implementation of the RDAP protocol makes registration data more secure, stable and resilient, and we are hopeful that the community will evolve the prescribed implementation of RDAP such that the full power of this rich protocol will be deployed.

You can learn more in the RDAP Help section of the Verisign website, and access helpful documents such as the RDAP technical implementation guide and the RDAP response profile.

The post Industry Insights: RDAP Becomes Internet Standard appeared first on Verisign Blog.

]]>
Afilias’ Rule Violations Continue to Delay .WEB https://blog.verisign.com/domain-names/afilias-rule-violations-continue-to-delay-web/ Tue, 14 Sep 2021 14:01:08 +0000 https://blog.verisign.com/?p=5970 Verisign LogoAs I noted on May 26, the final decision issued on May 20 in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN) rejected Afilias’ petition to nullify the results of the public auction for .WEB, and it further rejected Afilias’ demand to have it be […]

The post Afilias’ Rule Violations Continue to Delay .WEB appeared first on Verisign Blog.

]]>

As I noted on May 26, the final decision issued on May 20 in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN) rejected Afilias’ petition to nullify the results of the public auction for .WEB, and it further rejected Afilias’ demand that it be awarded .WEB (at a price substantially lower than the winning bid). Instead, as we urged, the IRP Panel determined that the ICANN Board should move forward with reviewing the objections made about .WEB, and make a decision on delegation thereafter.

Afilias and its counsel both issued press releases claiming victory in an attempt to put a positive spin on the decision. In contrast to this public position, Afilias then quickly filed a 68-page application asking the Panel to reverse its decision. This application is, however, not permitted by the arbitration rules – which expressly prohibit such requests for “do overs.”

In addition to Afilias’ facially improper application, there is an even more serious instance of rule-breaking now described in a July 23 letter from Nu Dot Co (NDC) to ICANN. This letter sets out in considerable detail how Afilias engaged in prohibited conduct during the blackout period immediately before the .WEB auction in 2016, in violation of the auction rules. The letter shows how this rule violation is more than just a technicality; it was part of a broader scheme to rig the auction. The attachments to the letter shed light on how, during the blackout period, Afilias offered NDC money to stop ICANN’s public auction in favor of a private process – which would in turn deny the broader internet community the benefit of the proceeds of a public auction.

Afilias’ latest application to reverse the Panel’s decision, like its pre-auction misconduct 5 years ago, has only led to unnecessary delay of the delegation of .WEB. It is long past time for this multi-year campaign to come to an end. The Panel’s unanimous ruling makes clear that it strongly agrees.

The post Afilias’ Rule Violations Continue to Delay .WEB appeared first on Verisign Blog.

]]>
Websites, Branded Email Remain Key to SMB Internet Services https://blog.verisign.com/getting-online/websites-branded-email-remain-key-to-smb-internet-services/ Thu, 09 Sep 2021 17:31:49 +0000 https://blog.verisign.com/?p=5948 Study Commissioned by Verisign Shows Websites Can Help Add Credibility and Drive New Business Businesses today have many options for interacting with customers online. The findings of our independent survey of online consumers suggest that websites and branded email continue to be critical components of many businesses’ online presence, essential to supporting consumer confidence and […]

The post Websites, Branded Email Remain Key to SMB Internet Services appeared first on Verisign Blog.

]]>

Study Commissioned by Verisign Shows Websites Can Help Add Credibility and Drive New Business

Businesses today have many options for interacting with customers online. The findings of our independent survey of online consumers suggest that websites and branded email continue to be critical components of many businesses’ online presence, essential to supporting consumer confidence and enabling effective interaction with customers.

The quantitative study, commissioned by Verisign and conducted in December 2019 and January 2020 by 451 Research, now a part of S&P Global Market Intelligence, surveyed 5,450 online consumers across key markets in North America, Latin America, Europe and Asia to help understand their sentiments on interacting with businesses online.

The survey was designed to arm service providers and registrars with an understanding of how the resources they provide to businesses can help create trust and deliver value to their customers.

Websites help add credibility

Among those surveyed, approximately two-thirds (66%) agreed that a business with its own website is more credible than one without. Likewise, a majority indicated that they would expect it to be more difficult to verify the identity of (56%), find online (55%) and contact (54%) a business that does not have its own website.

Certainly, this doesn’t suggest that businesses should abandon other online channels, such as social media and search engine efforts, to focus on a website-only approach. Instead, 64% of respondents said that a business with many points of online presence is more credible than a business with few.

Still, the study suggests that other online resources should complement, rather than replace, a small business’s own website. Respondents identified a business’s own website as being one of the most popular online methods for learning about (69%) and conducting transactions with (57%) businesses. Further, 71% of respondents reported being more likely to recommend a business with a professional website.

Taken together, these findings suggest that a website can help add credibility and drive new business.

Branded email supports customer communications

Trust is central to the relationship between a business and its customers. This may be particularly true for online transactions (95% of survey respondents said they actively make purchases online), which require consumers to trust not only that the business will deliver the product or service for which they have paid, but also that it will not misuse payment or personal information.

A branded email address may be able to help, as an overwhelming number of respondents (85%) agreed that a business with a branded email address is more credible than one that uses a free email account. Respondents were more likely to have used a business’s branded email address (67%), than the telephone (56%) or social media (40%), to communicate with a business during the prior 12 months.

Key takeaway

For a small business, failing to be perceived as credible online could mean lost business not just today, but also in the future. A website and branded email address can help businesses add credibility and more effectively engage with consumers online.

Service providers offer a variety of website-building tools, email hosting solutions, and domain name registration services that can help businesses – whether just starting or well-established – to have a website and use a branded email.

Detailed survey results are available in 451 Research’s Black & White Paper Websites, Branded Email Remain Key to SMB Internet Services.


Verisign is a global wholesale provider of some of the world’s most recognized top-level domains, including .com and .net. For website building tools and email hosting solutions, contact a registrar. You can find a registrar here.

The post Websites, Branded Email Remain Key to SMB Internet Services appeared first on Verisign Blog.

]]>
Verisign Q2 2021 The Domain Name Industry Brief: 367.3 Million Domain Name Registrations in the Second Quarter of 2021 https://blog.verisign.com/domain-names/verisign-q2-2021-the-domain-name-industry-brief-367-3-million-domain-name-registrations-in-the-second-quarter-of-2021/ Thu, 02 Sep 2021 19:29:02 +0000 https://blog.verisign.com/?p=5934 Q2 2021 Domain Name Industry Brief Report CoverToday, we released the latest issue of The Domain Name Industry Brief, which shows that the second quarter of 2021 closed with 367.3 million domain name registrations across all top-level domains (TLDs), an increase of 3.8 million domain name registrations, or 1.0%, compared to the first quarter of 2021.1,2 Domain name registrations have decreased by […]

The post Verisign Q2 2021 The Domain Name Industry Brief: 367.3 Million Domain Name Registrations in the Second Quarter of 2021 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of The Domain Name Industry Brief, which shows that the second quarter of 2021 closed with 367.3 million domain name registrations across all top-level domains (TLDs), an increase of 3.8 million domain name registrations, or 1.0%, compared to the first quarter of 2021.1,2 Domain name registrations have decreased by 2.8 million, or 0.7%, year over year.1,2

Q2 2021 closed with 367.3 million domain name registrations across all TLDs.

Check out the latest issue of The Domain Name Industry Brief to see domain name stats from the second quarter of 2021, including:

This quarter’s The Domain Name Industry Brief also includes an overview of how Registration Data Access Protocol (RDAP) improves upon the legacy WHOIS protocol.

To see past issues of The Domain Name Industry Brief, please visit verisign.com/dnibarchives.


1. The figure(s) includes domain names in the .tk ccTLD. .tk is a ccTLD that provides free domain names to individuals and businesses. Revenue is generated by monetizing expired domain names. Domain names no longer in use by the registrant or expired are taken back by the registry and the residual traffic is sold to advertising networks. As such, there are no deleted .tk domain names. https://www.businesswire.com/news/home/20131216006048/en/Freenom-Closes-3M-Series-Funding#.UxeUGNJDv9s.

2. The generic top-level domain (gTLD), new gTLD (ngTLD) and ccTLD data cited in the brief: (i) includes ccTLD Internationalized Domain Names (IDNs), (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q2 2021 The Domain Name Industry Brief: 367.3 Million Domain Name Registrations in the Second Quarter of 2021 appeared first on Verisign Blog.

]]>
The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award https://blog.verisign.com/domain-names/the-test-of-time-at-internet-scale-verisigns-danny-mcpherson-recognized-with-acm-sigcomm-award/ Fri, 27 Aug 2021 14:04:05 +0000 https://blog.verisign.com/?p=5926 Header image: Flashing technical imageryThe global internet, from the perspective of its billions of users, has often been envisioned as a cloud — a shapeless structure that connects users to applications and to one another, with the internal details left up to the infrastructure operators inside. From the perspective of the infrastructure operators, however, the global internet is a […]

The post The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award appeared first on Verisign Blog.

]]>

The global internet, from the perspective of its billions of users, has often been envisioned as a cloud — a shapeless structure that connects users to applications and to one another, with the internal details left up to the infrastructure operators inside.

From the perspective of the infrastructure operators, however, the global internet is a network of networks. It’s a complex set of connections among network operators, application platforms, content providers and other parties.

And just as the total amount of global internet traffic continues to grow, so too does the shape and structure of the internet — the internal details of the cloud — continue to evolve.

At the Association for Computing Machinery’s Special Interest Group on Data Communications (ACM SIGCOMM) conference in 2010, researchers at Arbor Networks and the University of Michigan, including Danny McPherson, now executive vice president and chief security officer at Verisign, published one of the first papers to analyze the internal structure of the internet in detail.

The study, entitled “Internet Inter-Domain Traffic,” drew from two years of measurements involving more than 200 exabytes of data.

One of the paper’s key observations was the emergence of a “global internet core” of a relatively small number of large application and content providers that had become responsible for the majority of the traffic between different parts of the internet — in contrast to the previous topology where large network operators were the primary source.

The authors’ conclusion: “we expect the trend towards internet inter-domain traffic consolidation to continue and even accelerate.”

The paper’s predictions of internet traffic and topology trends proved out over the past decade, as confirmed by one of the paper’s authors, Craig Labovitz, in a 2019 presentation that reiterated the paper’s main findings: the internet is “getting bigger by traffic volume” while also “rapidly getting smaller by concentration of content sources.”

This week, the ACM SIGCOMM 2021 conference series recognized the enduring value of the research with the prestigious Test of Time Paper Award, given to a paper that is “deemed to be an outstanding paper whose contents are still a vibrant and useful contribution today.”

Internet measurement research is particularly relevant to Domain Name System (DNS) operators such as Verisign. To optimize the deployment of their services, DNS operators need to know where DNS query traffic is most likely to be exchanged in the coming years. Insights into the internal structure of the internet can help DNS operators ensure the ongoing security, stability and resiliency of their services, for the benefit both of other infrastructure operators who depend on DNS, and the billions of users who connect online every day.

Congratulations to Danny and co-authors Craig Labovitz, Scott Iekel-Johnson, Jon Oberheide and Farnam Jahanian on receiving this award, and thanks to ACM SIGCOMM for its recognition of the research. If you’re curious about what evolutionary developments Danny and others at Verisign are envisioning today about the internet of the future, subscribe to the Verisign blog, and follow us on Twitter and LinkedIn.

The post The Test of Time at Internet Scale: Verisign’s Danny McPherson Recognized with ACM SIGCOMM Award appeared first on Verisign Blog.

]]>
Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets https://blog.verisign.com/domain-names/industry-insights-verisign-icann-and-industry-partners-collaborate-to-combat-botnets/ Tue, 22 Jun 2021 17:02:39 +0000 https://blog.verisign.com/?p=5902 An image of multiple botnets for the Verisign blog "Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets"Note: This article originally appeared in Verisign’s Q1 2021 Domain Name Industry Brief. This article expands on observations of a botnet traffic group at various levels of the Domain Name System (DNS) hierarchy, presented at DNS-OARC 35. Addressing DNS abuse and maintaining a healthy DNS ecosystem are important components of Verisign’s commitment to being a […]

The post Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets appeared first on Verisign Blog.

]]>

Note: This article originally appeared in Verisign’s Q1 2021 Domain Name Industry Brief.

This article expands on observations of a botnet traffic group at various levels of the Domain Name System (DNS) hierarchy, presented at DNS-OARC 35.

Addressing DNS abuse and maintaining a healthy DNS ecosystem are important components of Verisign’s commitment to being a responsible steward of the internet. We continuously engage with the Internet Corporation for Assigned Names and Numbers (ICANN) and other industry partners to help ensure the secure, stable and resilient operation of the DNS.

Based on recent telemetry data from Verisign’s authoritative top-level domain (TLD) name servers, Verisign observed a widespread botnet responsible for a disproportionate amount of total global DNS queries – and, in coordination with several registrars, registries and ICANN, acted expeditiously to remediate it.

Just prior to Verisign taking action to remediate the botnet, upwards of 27.5 billion queries per day were being sent to Verisign’s authoritative TLD name servers, accounting for roughly 10% of Verisign’s total DNS traffic. That amount of query volume in most DNS environments would be considered a sustained distributed denial-of-service (DDoS) attack.

These queries were associated with a particular piece of malware that emerged in 2018, spreading throughout the internet to create a global botnet infrastructure. Botnets provide a substrate for malicious actors to theoretically perform all manner of malicious activity – executing DDoS attacks, exfiltrating data, sending spam, conducting phishing campaigns or even installing ransomware. This is the result of the malware’s ability to download and execute any other type of payload the malicious actor desires.

Malware authors often apply various forms of evasion techniques to protect their botnets from being detected and remediated. A Domain Generation Algorithm (DGA) is an example of such an evasion technique.

DGAs are seen in various families of malware that periodically generate a number of domain names, which can be used as rendezvous points for botnet command-and-control servers. By using a DGA to build the list of domain names, the malicious actor makes it more difficult for security practitioners to identify what domain names will be used and when. Only by exhaustively reverse-engineering a piece of malware can the definitive set of domain names be ascertained.
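
As a purely illustrative example, the toy sketch below shows the general shape of a DGA: a deterministic function of a shared seed and the current date that both the malware and anyone who has reverse-engineered it can compute. The seed, hash construction and TLD list here are hypothetical and are not drawn from any particular malware family.

```python
# Toy DGA illustration: deterministic candidate domains from a seed and a date.
# All parameters here are hypothetical; real malware families differ considerably.
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5,
                     tlds: tuple = (".com", ".net", ".cc")) -> list:
    """Derive a fixed list of candidate rendezvous domain names for a given day."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode("ascii")
        label = hashlib.sha256(material).hexdigest()[:12]
        domains.append(label + tlds[i % len(tlds)])
    return domains

if __name__ == "__main__":
    # Defenders who recover the seed can precompute the same list and act on it.
    for name in generate_domains("hypothetical-seed", date(2021, 1, 7)):
        print(name)
```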

The choices miscreants make when tailoring malware DGAs directly influence the DGAs’ ability to evade detection. For instance, electing to use more TLDs and a large number of domain names in a given time period makes the malware’s operation more difficult to disrupt; however, this approach also increases the amount of network noise, making it easier for security and network teams to identify anomalous traffic patterns. Likewise, a DGA that uses a limited number of TLDs and domain names will generate significantly less network noise but is more fragile and susceptible to remediation.

Botnets that implement DGAs or otherwise utilize domain names clearly represent an “abuse of the DNS,” as opposed to other types of abuse that are executed “via the DNS,” such as phishing. This is an important distinction the DNS community should consider as it continues to refine the scope of DNS abuse and how remediation of the various abuses can be effectuated.

The remediation of domain names used by botnets as rendezvous points poses numerous operational challenges and yields valuable insights. The set of domain names needs to be identified and investigated to determine their current registration status. Risk assessments must then be performed on registered domain names to determine whether additional actions should be taken, such as sending registrar notifications, issuing requests to transfer domain names, adding Extensible Provisioning Protocol (EPP) hold statuses, altering delegation records, etc. There are also timing and coordination elements that must be balanced with external entities, such as ICANN, law enforcement, Computer Emergency Readiness Teams (CERTs) and contracted parties, including registrars and registries. Other technical measures also need to be considered, designed and deployed to achieve the desired remediation goal.
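
The sketch below is a deliberately simplified, hypothetical illustration of that triage flow: each candidate domain name’s registration status maps to a possible next step. The statuses and actions are stand-ins for what is in reality a coordinated, multi-party process; the sketch only shows the decision points.

```python
# Hypothetical triage sketch for DGA-derived domain names; the statuses and
# actions are simplified stand-ins for the coordinated process described above.
from enum import Enum, auto

class Status(Enum):
    AVAILABLE = auto()   # not yet registered by anyone
    REGISTERED = auto()  # registered, possibly by the botnet operator
    SINKHOLED = auto()   # already delegated to a sinkhole

def next_action(domain: str, status: Status) -> str:
    """Map a domain name's registration status to a candidate remediation step."""
    if status is Status.AVAILABLE:
        return f"register {domain} defensively and plan its sinkhole delegation"
    if status is Status.REGISTERED:
        return (f"assess risk for {domain}; consider registrar notification, "
                "a transfer request or an EPP hold status")
    return f"keep monitoring sinkhole query traffic for {domain}"

if __name__ == "__main__":
    print(next_action("abc123def456.com", Status.AVAILABLE))
    print(next_action("fedcba654321.net", Status.REGISTERED))
```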

After coordinating with ICANN, and several registrars and registries, Verisign registered the remaining available botnet domain names and began a three-phase plan to sinkhole those domain names. Ultimately, this remediation effort would reduce the traffic sent to Verisign authoritative name servers and effectively eliminate the botnet’s ability to use command-and-control domain names within Verisign-operated TLDs.

Figure 1 below shows the amount of botnet traffic Verisign authoritative name servers received prior to intervention, and throughout the process of registering, delegating and sinkholing the botnet domain names.

Figure 1: The botnet’s DNS query volume at Verisign authoritative name servers.

Phase one was executed on Dec. 21, 2020, in which 100 .cc domain names were configured to resolve to Verisign-operated sinkhole servers. Subsequently, traffic at Verisign authoritative name servers quickly decreased. The second group, containing 500 .com and .net domain names, was sinkholed on Jan. 7, 2021. Again, traffic volume at Verisign authoritative name servers quickly decreased. The final group of 879 .com and .net domain names was sinkholed on Jan. 13, 2021. By the end of phase three, the cumulative DNS traffic reduction surpassed 25 billion queries per day. Verisign reserved approximately 10 percent of the botnet domain names to remain on serverHold as a placebo/control group to better understand sinkholing effects as they relate to query volume at the child and parent zones. Verisign believes that sinkholing the remaining domain names would further reduce authoritative name server traffic by an additional one billion queries.

This botnet highlights the remarkable Pareto-like distribution of DNS query traffic, in which a few thousand domain names, out of namespaces containing more than 165 million domain names, demand a vastly disproportionate amount of DNS resources.

What causes the amplification of DNS traffic volume for non-existent domain names to occur at the upper levels of the DNS hierarchy? Verisign is conducting a variety of measurements on the sinkholed botnet domain names to better understand the caching behavior of the resolver population. We are observing some interesting traffic changes at the TLD and root name servers when time to live (TTL) and response codes are altered at the sinkhole servers. Stay tuned.

In addition to remediating this botnet in late 2020 and into early 2021, Verisign extended its already four-year endeavor to combat the Avalanche botnet family. Since 2016, the Avalanche botnet had been significantly impacted due to actions taken by Verisign and an international consortium of law enforcement, academic and private organizations. However, many of the underlying Avalanche-compromised machines are still not remediated, and the threat from Avalanche could increase again if additional actions are not taken. To prevent this from happening, Verisign, in coordination with ICANN and other industry partners, is using a variety of tools to ensure Avalanche command-and-control domain names cannot be used in Verisign-operated TLDs.

Botnets are a persistent issue. And as long as they exist as a threat to the security, stability and resiliency of the DNS, cross-industry coordination and collaboration will continue to lie at the core of combating them.

This piece was co-authored by Matt Thomas and Duane Wessels, distinguished engineers at Verisign.

The post Industry Insights: Verisign, ICANN and Industry Partners Collaborate to Combat Botnets appeared first on Verisign Blog.

]]>
Verisign Q1 2021 Domain Name Industry Brief: 363.5 Million Domain Name Registrations in the First Quarter of 2021 https://blog.verisign.com/domain-names/verisign-q1-2021-domain-name-industry-brief-363-5-million-domain-name-registrations-in-the-first-quarter-of-2021/ Thu, 03 Jun 2021 19:22:08 +0000 https://blog.verisign.com/?p=5882 Q1 2021 Domain Name Industry Brief Report CoverToday, we released the latest issue of the Domain Name Industry Brief, which shows that the first quarter of 2021 closed with 363.5 million domain name registrations across all top-level domains (TLDs), a decrease of 2.8 million domain name registrations, or 0.8%, compared to the fourth quarter of 2020.1,2 Domain name registrations have decreased by […]

The post Verisign Q1 2021 Domain Name Industry Brief: 363.5 Million Domain Name Registrations in the First Quarter of 2021 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of the Domain Name Industry Brief, which shows that the first quarter of 2021 closed with 363.5 million domain name registrations across all top-level domains (TLDs), a decrease of 2.8 million domain name registrations, or 0.8%, compared to the fourth quarter of 2020.1,2 Domain name registrations have decreased by 3.3 million, or 0.9%, year over year.1,2

Q1 2021 domain name registrations across all top-level domains

Check out the latest issue of the Domain Name Industry Brief to see domain name stats from the first quarter of 2021, including:

This quarter’s Domain Name Industry Brief also includes a look at a recent collaboration between Verisign, ICANN and industry partners to combat botnets.

To see past issues of the Domain Name Industry Brief, please visit verisign.com/dnibarchives.


1. The figure(s) includes domain names in the .tk ccTLD. .tk is a ccTLD that provides free domain names to individuals and businesses. Revenue is generated by monetizing expired domain names. Domain names no longer in use by the registrant or expired are taken back by the registry and the residual traffic is sold to advertising networks. As such, there are no deleted .tk domain names. https://www.businesswire.com/news/home/20131216006048/en/Freenom-Closes-3M-Series-Funding#.UxeUGNJDv9s.

2. The generic top-level domain (gTLD), new gTLD (ngTLD) and ccTLD data cited in the brief: (i) includes ccTLD Internationalized Domain Names (IDNs), (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The internet had 363.5 million domain name registrations at the end of Q1 2021.

The post Verisign Q1 2021 Domain Name Industry Brief: 363.5 Million Domain Name Registrations in the First Quarter of 2021 appeared first on Verisign Blog.

]]>
IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias https://blog.verisign.com/domain-names/irp-panel-dismisses-afilias-claims-to-reverse-web-auction-and-award-web-to-afilias/ Wed, 26 May 2021 15:27:47 +0000 https://blog.verisign.com/?p=5870 Verisign LogoOn Thursday, May 20, a final decision was issued in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN), rejecting Afilias’ petition to nullify the results of the July 27, 2016 public auction for the .WEB new generic top level domain (gTLD) and to award .WEB […]

The post IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias appeared first on Verisign Blog.

]]>

On Thursday, May 20, a final decision was issued in the Independent Review Process (IRP) brought by Afilias against the Internet Corporation for Assigned Names and Numbers (ICANN), rejecting Afilias’ petition to nullify the results of the July 27, 2016 public auction for the .WEB new generic top level domain (gTLD) and to award .WEB to Afilias at a substantially lower, non-competitive price. Nu Dotco, LLC (NDC) submitted the highest bid at the auction and was declared the winner, over Afilias’ lower, losing bid. Despite Afilias’ repeated objections to participation by NDC or Verisign in the IRP, the Panel ordered that NDC and Verisign could each participate in the IRP in a limited capacity as amicus curiae.

Consistent with NDC, Verisign and ICANN’s position in the IRP, the Order dismisses “the Claimant’s [Afilias’] request that Respondent [ICANN] be ordered by the Panel to disqualify NDC’s bid for .WEB, proceed with contracting the Registry Agreement for .WEB with the Claimant in accordance with the New gTLD Program Rules, and specify the bid price to be paid by the Claimant.” Contrary to Afilias’ position, all objections to the auction are referred to ICANN for determination. This includes Afilias’ objections as well as objections by NDC that Afilias violated the auction rules by engaging in secret discussions during the Blackout Period under the Program Rules.

The Order Dismisses All of Afilias’ Claims of Violations by NDC or Verisign

Afilias’ claims for relief were based on its allegation that NDC violated the New gTLD Program Rules by entering into an agreement with Verisign, under which Verisign provided funds for NDC’s participation in the .WEB auction in exchange for NDC’s commitment, if it prevailed at the auction and entered into a registry agreement with ICANN, to assign its .WEB registry agreement to Verisign upon ICANN’s consent to the assignment. As the Panel determined, the relief requested by Afilias far exceeded the scope of proper IRP relief provided for in ICANN’s Bylaws, which limit an IRP to a determination whether or not ICANN has exceeded its mission or otherwise failed to comply with its Articles of Incorporation and Bylaws.

Issued two and a half years after Afilias initiated its IRP, the Panel’s decision unequivocally rejects Afilias’ attempt to misuse the IRP to rule on claims of NDC or Verisign misconduct or obtain the .WEB gTLD for itself despite its losing bid. The Panel held that it is for ICANN, which has the requisite knowledge, expertise, and experience, to determine whether NDC’s contract with Verisign violated ICANN’s Program Rules. The Panel further determined that it would be improper for the Panel to dictate what should be the consequences of an alleged violation of the rules contained in the gTLD Applicant Guidebook, if any took place. The Panel therefore denied Afilias’ requests for a binding declaration that ICANN must disqualify NDC’s bid for violating the Guidebook rules and award .WEB to Afilias.

Despite pursuing its claims in the IRP for over two years — at a cost of millions of dollars — Afilias failed to offer any evidence to support its allegations that NDC improperly failed to update its application and/or assigned its application to Verisign. Instead, the evidence was to the contrary. Indeed, Afilias failed to offer testimony from a single Afilias witness during the hearing on the merits, including witnesses with direct knowledge of relevant events and industry practices. It is apparent that Afilias failed to call as witnesses any of its own officers or employees because, testifying under penalty of perjury, they would have been forced to contradict the false allegations advanced by Afilias during the IRP. By contrast, ICANN, NDC and Verisign each supported their respective positions appropriately by calling witnesses to testify and be subject to cross-examination by the three-arbitrator panel and Afilias, under oath, with respect to the facts refuting Afilias’ claims.

Afilias also argued in the IRP that ICANN is a competition regulator and that ICANN’s commitment, contained in its Bylaws, to promote competition required ICANN to disqualify NDC’s bid for the .WEB gTLD because NDC’s contract with Verisign may lead to Verisign’s operation of .WEB. The Panel rejected Afilias’ claim, agreeing with ICANN and Verisign that “ICANN does not have the power, authority, or expertise to act as a competition regulator by challenging or policing anticompetitive transactions or conduct.” The Panel found ICANN’s evidence “compelling” that it fulfills its mission to promote competition through the expansion of the domain name space and facilitation of innovative approaches to the delivery of domain name registry services, not by acting as an antitrust regulator. The Panel quoted Afilias’ own statements to this effect, which were made outside of the IRP proceedings when Afilias had different interests.

Although the Panel rejected Afilias’ Guidebook and competition claims, it did find that the manner in which ICANN addressed complaints about the .WEB matter did not meet all of the commitments in its Bylaws. But even so, Afilias was awarded only a small portion of the legal fees it hoped to recover from ICANN.

Moving Forward with .WEB

It is now up to ICANN to move forward expeditiously to determine, consistent with its Bylaws, the validity of any objections under the New gTLD Program Rules in connection with the .WEB auction, including NDC and Verisign’s position that Afilias should be disqualified from making any further objections to NDC’s application.

As Verisign and NDC pointed out in 2016, the evidence during the IRP establishes that collusive conduct by Afilias in connection with the auction violated the Guidebook. The Guidebook and Auction Rules both prohibit applicants within a contention set from discussing “bidding strategies” or “settlement” during a designated Blackout Period in advance of an auction. Violation of the Blackout Period is a “serious violation” of ICANN’s rules and may result in forfeiture of an applicant’s application. The evidence adduced in the IRP proves that Afilias committed such violations and should be disqualified. On July 22, just four days before the public ICANN auction for .WEB, Afilias contacted NDC, following Afilias’ discussions with other applicants, to try to negotiate a private auction if ICANN would delay the public auction. Afilias knew the Blackout Period was in effect, but nonetheless violated it in an attempt to persuade NDC to participate in a private auction. Under the settlement Afilias proposed, Afilias would make millions of dollars even if it lost the auction, rather than the auction proceeds being put to use for the internet community through ICANN’s investment of those proceeds as determined by the community.

All of the issues raised during the IRP were the subject of extensive briefing, evidentiary submissions and live testimony during the hearing on the merits, providing ICANN with a substantial record on which to render a determination with respect to .WEB and proceed forward with delegation of the new gTLD. Verisign stands ready to assist ICANN in any way we can to quickly resolve this matter so that domain names within the .WEB gTLD can finally be made available to businesses and consumers.

As a final observation: Afilias no longer operates a registry business, and has neither the platform, organization, nor necessary consents from ICANN, to support one. Inconsistent with Afilias’ claims in the IRP, Afilias transferred its entire registry business to Donuts during the pendency of the IRP. Although long in the works, the sale was not disclosed by Afilias either before or during the IRP hearings, nor, remarkably, did Afilias produce any company witness for examination who might have disclosed the sale to the panel of arbitrators or others. Based on a necessary public disclosure of the Donuts sale after the hearings and before entry of the Panel’s Order, the Panel included in its final Order a determination that it is for ICANN to determine whether Afilias’ sale is itself a basis for a denial of Afilias’ claims with respect to .WEB.

Verisign’s analysis of the Independent Review Process decision regarding the awarding of the .web top level domain.

The post IRP Panel Dismisses Afilias’ Claims to Reverse .WEB Auction and Award .WEB to Afilias appeared first on Verisign Blog.

]]>
Verisign Support for AAPI Communities and COVID Relief in India https://blog.verisign.com/domain-names/verisign-support-for-aapi-communities-and-covid-relief-in-india/ Wed, 19 May 2021 17:39:43 +0000 https://blog.verisign.com/?p=5837 Verisign LogoAt Verisign we have a commitment to making a positive and lasting impact on the global internet community, and on the communities in which we live and work. This commitment guided our initial efforts to help these communities respond to and recover from the effects of the COVID-19 pandemic, over a year ago. And at […]

The post Verisign Support for AAPI Communities and COVID Relief in India appeared first on Verisign Blog.

]]>

At Verisign we have a commitment to making a positive and lasting impact on the global internet community, and on the communities in which we live and work.

This commitment guided our initial efforts to help these communities respond to and recover from the effects of the COVID-19 pandemic, over a year ago. And at the end of 2020, our sense of partnership with our local communities helped shape our efforts to alleviate COVID-related food insecurity in the areas where we have our most substantial footprint. This same sense of community is reflected in our partnership with Virginia Ready, which aims to help individuals in our home State of Virginia access training and certification to pivot to new careers in the technology sector.

We also believe that our team is one of our most important assets. We particularly value the diverse origins of our people; we have colleagues from all over the world who, in turn, are closely connected to their own communities both in the United States and elsewhere. A significant proportion of our staff are of either Asian American and Pacific Islander (AAPI) or South Asian origin, and today we are pleased to announce two charitable contributions, via our Verisign Cares program, directly related to these two communities.

First, Verisign is pleased to associate ourselves with the Stand with Asian Americans initiative, launched by AAPI business leaders in response to recent and upsetting episodes of aggression toward their community. Verisign supports this initiative and the pledge for which it stands, and has made a substantial contribution to the initiative’s partner, the Asian Pacific Fund, to help uplift the AAPI community.

Second, and after consultation with our staff, we have directed significant charitable contributions to organizations helping to fight the worsening wave of COVID-19 in India. Through Direct Relief we will be helping to provide oxygen and other medical equipment to hospitals, while through GiveIndia we will be supporting families in India impacted by COVID-19.

The ‘extended Verisign family’ of our employees, and their families and their communities, means a tremendous amount to us – it is only thanks to our talented and dedicated people that we are able to continue to fulfill our mission of enabling the world to connect online with reliability and confidence, anytime, anywhere.

Verisign expands its community support initiatives, with contributions to COVID-19 relief in India and to the Stand with Asian Americans initiative.

The post Verisign Support for AAPI Communities and COVID Relief in India appeared first on Verisign Blog.

]]>
Verisign Q4 2020 Domain Name Industry Brief: 366.3 Million Domain Name Registrations in the Fourth Quarter of 2020 https://blog.verisign.com/domain-names/verisign-q4-2020-domain-name-industry-brief-366-3-million-domain-name-registrations-in-the-fourth-quarter-of-2020/ Thu, 04 Mar 2021 21:21:06 +0000 https://blog.verisign.com/?p=5810 Today, we released the latest issue of the Domain Name Industry Brief, which shows that the fourth quarter of 2020 closed with 366.3 million domain name registrations across all top-level domains (TLDs), a decrease of 4.4 million domain name registrations, or 1.2 percent, compared to the third quarter of 2020.1,2 Domain name registrations have grown […]

The post Verisign Q4 2020 Domain Name Industry Brief: 366.3 Million Domain Name Registrations in the Fourth Quarter of 2020 appeared first on Verisign Blog.

]]>

Today, we released the latest issue of the Domain Name Industry Brief, which shows that the fourth quarter of 2020 closed with 366.3 million domain name registrations across all top-level domains (TLDs), a decrease of 4.4 million domain name registrations, or 1.2 percent, compared to the third quarter of 2020.1,2 Domain name registrations have grown by 4.0 million, or 1.1 percent, year over year.1,2

366.3 MILLION DOMAIN NAME REGISTRATIONS IN THE FOURTH QUARTER OF 2020

Check out the latest issue of the Domain Name Industry Brief to see domain name stats from the fourth quarter of 2020, including:

This quarter’s Domain Name Industry Brief also includes a closer look at encryption and what new DNS capabilities may be possible with a “minimize at the root and top-level domain, encrypt when needed elsewhere” approach to DNS encryption.

To see past issues of the Domain Name Industry Brief, please visit Verisign.com/DNIBArchives.


1. The figure(s) includes domain names in the .tk ccTLD. .tk is a ccTLD that provides free domain names to individuals and businesses. Revenue is generated by monetizing expired domain names. Domain names no longer in use by the registrant or expired are taken back by the registry and the residual traffic is sold to advertising networks. As such, there are no deleted .tk domain names. https://www.businesswire.com/news/home/20131216006048/en/Freenom-Closes-3M-Series-Funding#.UxeUGNJDv9s.

2. The generic top-level domain (gTLD), new gTLD (ngTLD) and ccTLD data cited in the brief: (i) includes ccTLD Internationalized Domain Names (IDNs), (ii) is an estimate as of the time this brief was developed and (iii) is subject to change as more complete data is received. Some numbers in the brief may reflect standard rounding.

The post Verisign Q4 2020 Domain Name Industry Brief: 366.3 Million Domain Name Registrations in the Fourth Quarter of 2020 appeared first on Verisign Blog.

]]>
Information Protection for the Domain Name System: Encryption and Minimization https://blog.verisign.com/security/information-protection-for-the-domain-name-system-encryption-and-minimization/ Wed, 27 Jan 2021 16:01:50 +0000 https://blog.verisign.com/?p=5774 This is the final in a multi-part series on cryptography and the Domain Name System (DNS). In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC). In this final blog post, I’ll turn attention to another […]

The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

]]>

This is the final in a multi-part series on cryptography and the Domain Name System (DNS).

In previous posts in this series, I’ve discussed a number of applications of cryptography to the DNS, many of them related to the Domain Name System Security Extensions (DNSSEC).

In this final blog post, I’ll turn attention to another application that may appear at first to be the most natural, though as it turns out, may not always be the most necessary: DNS encryption. (I’ve also written about DNS encryption as well as minimization in a separate post on DNS information protection.)

DNS Encryption

In 2014, the Internet Engineering Task Force (IETF) chartered the DNS PRIVate Exchange (dprive) working group to start work on encrypting DNS queries and responses exchanged between clients and resolvers.

That work resulted in RFC 7858, published in 2016, which describes how to run the DNS protocol over the Transport Layer Security (TLS) protocol, also known as DNS over TLS, or DoT.

DNS encryption between clients and resolvers has since gained further momentum, with multiple browsers and resolvers supporting DNS over Hypertext Transport Protocol Security (HTTPS), or DoH, with the formation of the Encrypted DNS Deployment Initiative, and with further enhancements such as oblivious DoH.

The dprive working group turned its attention to the resolver-to-authoritative exchange during its rechartering in 2018. And in October of last year, ICANN’s Office of the CTO published its strategy recommendations for the ICANN-managed Root Server (IMRS, i.e., the L-Root Server), an effort motivated in part by concern about potential “confidentiality attacks” on the resolver-to-root connection.

From a cryptographer’s perspective the prospect of adding encryption to the DNS protocol is naturally quite interesting. But this perspective isn’t the only one that matters, as I’ve observed numerous times in previous posts.

Balancing Cryptographic and Operational Considerations

A common theme in this series on cryptography and the DNS has been the question of whether the benefits of a technology are sufficient to justify its cost and complexity.

This question came up not only in my review of two newer cryptographic advances, but also in my remarks on the motivation for two established tools for providing evidence that a domain name doesn’t exist.

Recall that the two tools — the Next Secure (NSEC) and Next Secure 3 (NSEC3) records — were developed because a simpler approach didn’t have an acceptable risk / benefit tradeoff. In the simpler approach, to provide a relying party assurance that a domain name doesn’t exist, a name server would return a response, signed with its private key, “<name> doesn’t exist.”

From a cryptographic perspective, the simpler approach would meet its goal: a relying party could then validate the response with the corresponding public key. However, the approach would introduce new operational risks, because the name server would now have to perform online cryptographic operations.

The name server would not only have to protect its private key from compromise, but would also have to protect the cryptographic operations from overuse by attackers. That could open another avenue for denial-of-service attacks that could prevent the name server from responding to legitimate requests.

The designers of DNSSEC mitigated these operational risks by developing NSEC and NSEC3, which gave the option of moving the private key and the cryptographic operations offline, into the name server’s provisioning system. This better solution balanced cryptography and operations. The same theme is now returning to view through the recent efforts around DNS encryption.

Like the simpler initial approach for authentication, DNS encryption may meet its goal from a cryptographic perspective. But the operational perspective is important as well. As designers again consider where and how to deploy private keys and cryptographic operations across the DNS ecosystem, alternatives with a better balance are a desirable goal.

Minimization Techniques

In addition to encryption, there has been research into other, possibly lower-risk alternatives that can be used in place of or in addition to encryption at various levels of the DNS.

We call these techniques collectively minimization techniques.

Qname Minimization

In “textbook” DNS resolution, a resolver sends the same full domain name to a root server, a top-level domain (TLD) server, a second-level domain (SLD) server, and any other server in the chain of referrals, until it ultimately receives an authoritative answer to a DNS query.

This is the way that DNS resolution has been practiced for decades, and it’s also one of the reasons for the recent interest in protecting information on the resolver-to-authoritative exchange: The full domain name is more information than all but the last name server needs to know.

One such minimization technique, known as qname minimization, was identified by Verisign researchers in 2011 and documented in RFC 7816 in 2016. (In 2015, Verisign announced a royalty-free license to its qname minimization patents.)

With qname minimization, instead of sending the full domain name to each name server, the resolver sends only as much as the name server needs either to answer the query or to refer the resolver to a name server at the next level. This follows the principle of minimum disclosure: the resolver sends only as much information as the name server needs to “do its job.” As Matt Thomas described in his recent blog post on the topic, nearly half of all queries received by Verisign’s .com and .net TLD servers were in a minimized form as of August 2020.
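
To illustrate the idea, here is a minimal sketch (in Python, for illustration only, and ignoring details such as zone cuts discovered at runtime) of the successively longer query names a minimizing resolver would send at each level of the hierarchy.

  # A minimal sketch of qname minimization: at each level of the DNS hierarchy,
  # the resolver reveals only one more label of the full domain name.
  # Illustration only; real resolvers discover zone cuts dynamically and handle
  # many edge cases described in RFC 7816.

  def minimized_qnames(full_name: str):
      """Yield the successively longer names a minimizing resolver would query."""
      labels = full_name.rstrip(".").split(".")
      # Build names from the right: TLD first, then SLD, and so on.
      for i in range(1, len(labels) + 1):
          yield ".".join(labels[-i:]) + "."

  for qname in minimized_qnames("www.example.com"):
      print(qname)
  # Prints "com." (sent to a root server), then "example.com." (sent to a .com
  # TLD server), then "www.example.com." (sent to the example.com server).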

Additional Minimization Techniques

Other techniques that are part of this new chapter in DNS protocol evolution include NXDOMAIN cut processing [RFC 8020] and aggressive DNSSEC caching [RFC 8198]. Both leverage information present in the DNS to reduce the amount and sensitivity of DNS information exchanged with authoritative name servers. In aggressive DNSSEC caching, for example, the resolver analyzes NSEC and NSEC3 range proofs obtained in response to previous queries to determine on its own whether a domain name doesn’t exist. This means that the resolver doesn’t always have to ask the authoritative server system about a domain name it hasn’t seen before.

All of these techniques, as well as additional minimization alternatives I haven’t mentioned, have one important common characteristic: they only change how the resolver operates during the resolver-authoritative exchange. They have no impact on the authoritative name server or on other parties during the exchange itself. They thereby mitigate disclosure risk while also minimizing operational risk.

The resolver’s exchanges with authoritative name servers, prior to minimization, were already relatively less sensitive because they represented aggregate interests of the resolver’s many clients1. Minimization techniques lower the sensitivity even further at the root and TLD levels: the resolver sends only its aggregate interests in TLDs to root servers, and only its interests in SLDs to TLD servers. The resolver still sends the aggregate interests in full domain names at the SLD level and below2, and may also include certain client-related information at these levels, such as the client-subnet extension. The lower levels therefore may have different protection objectives than the upper levels.

Conclusion

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that when deployed carefully can balance between cryptographic and operational perspectives.

These tools complement those I’ve described in previous posts in this series. Some have already been deployed at scale, such as DNSSEC with its NSEC and NSEC3 non-existence proofs. Others are at various earlier stages, like NSEC5 and tokenized queries, and still others contemplate “post-quantum” scenarios and how to address them. (And there are yet other tools that I haven’t covered in this series, such as authenticated resolution and adaptive resolution.)

Modern cryptography is just about as old as the DNS. Both have matured since their introduction in the late 1970s and early 1980s respectively. Both bring fundamental capabilities to our connected world. Both continue to evolve to support new applications and to meet new security objectives. While they’ve often moved forward separately, as this blog series has shown, there are also opportunities for them to advance together. I look forward to sharing more insights from Verisign’s research in future blog posts.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

1. This argument obviously holds more weight for large resolvers than for small ones — and doesn’t apply for the less common case of individual clients running their own resolvers. However, small resolvers and individual clients seeking additional protection retain the option of sending sensitive queries through a large, trusted resolver, or through a privacy-enhancing proxy. The focus in our discussion is primarily on large resolvers.

2. In namespaces where domain names are registered at the SLD level, i.e., under an effective TLD, the statements in this note about “root and TLD” and “SLD level and below” should be “root through effective TLD” and “below effective TLD level.” For simplicity, I’ve placed the “zone cut” between TLD and SLD in this note.

Minimization techniques and encryption together give DNS designers additional tools for protecting DNS information — tools that, when deployed carefully, can balance cryptographic and operational perspectives.

The post Information Protection for the Domain Name System: Encryption and Minimization appeared first on Verisign Blog.

]]>
Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys https://blog.verisign.com/security/securing-the-dns-in-a-post-quantum-world-hash-based-signatures-and-synthesized-zone-signing-keys/ Thu, 21 Jan 2021 21:14:11 +0000 https://blog.verisign.com/?p=5741 This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS). In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms […]

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

]]>

This is the fifth in a multi-part series on cryptography and the Domain Name System (DNS).

In my last article, I described efforts underway to standardize new cryptographic algorithms that are designed to be less vulnerable to potential future advances in quantum computing. I also reviewed operational challenges to be considered when adding new algorithms to the DNS Security Extensions (DNSSEC).

In this post, I’ll look at hash-based signatures, a family of post-quantum algorithms that could be a good match for DNSSEC from the perspective of infrastructure stability.

I’ll also describe Verisign Labs research into a new concept called synthesized zone signing keys that could mitigate the impact of the large signature size for hash-based signatures, while still maintaining this family’s protections against quantum computing.

(Caveat: The concepts reviewed in this post are part of Verisign’s long-term research program and do not necessarily represent Verisign’s plans or positions on new products or services. Concepts developed in our research program may be subject to U.S. and/or international patents and/or patent applications.)

A Stable Algorithm Rollover

The DNS community’s root key signing key (KSK) rollover illustrates how complicated a change to DNSSEC infrastructure can be. Although successfully accomplished, this change was delayed by ICANN to ensure that enough resolvers had the public key required to validate signatures generated with the new root KSK private key.

Now imagine the complications if the DNS community also had to ensure that enough resolvers not only had a new key but also had a brand-new algorithm.

Imagine further what might happen if a weakness in this new algorithm were to be found after it was deployed. While there are procedures for emergency key rollovers, emergency algorithm rollovers would be more complicated, and perhaps controversial as well if a clear successor algorithm were not available.

I’m not suggesting that any of the post-quantum algorithms that might be standardized by NIST will be found to have a weakness. But confidence in cryptographic algorithms can be gained and lost over many years, sometimes decades.

From the perspective of infrastructure stability, therefore, it may make sense for DNSSEC to have a backup post-quantum algorithm built in from the start — one for which cryptographers already have significant confidence and experience. This algorithm might not be as efficient as other candidates, but there is less of a chance that it would ever need to be changed. This means that the more efficient candidates could be deployed in DNSSEC with the confidence that they have a stable fallback. It’s also important to keep in mind that the prospect of quantum computing is not the only reason system developers need to be considering new algorithms from time to time. As public-key cryptography pioneer Martin Hellman wisely cautioned, new classical (non-quantum) attacks could also emerge, whether or not a quantum computer is realized.

Hash-Based Signatures

The 1970s were a foundational time for public-key cryptography, producing not only the RSA algorithm and the Diffie-Hellman algorithm (which also provided the basic model for elliptic curve cryptography), but also hash-based signatures, invented in 1979 by another public-key cryptography founder, Ralph Merkle.

Hash-based signatures are interesting because their security depends only on the security of an underlying hash function.

It turns out that hash functions, as a concept, hold up very well against quantum computing advances — much better than currently established public-key algorithms do.

This means that Merkle’s hash-based signatures, now more than 40 years old, can rightly be considered the oldest post-quantum digital signature algorithm.

If it turns out that an individual hash function doesn’t hold up — whether against a quantum computer or a classical computer — then the hash function itself can be replaced, as cryptographers have been doing for years. That will likely be easier than changing to an entirely different post-quantum algorithm, especially one that involves very different concepts.

The conceptual stability of hash-based signatures is a reason that interoperable specifications are already being developed for variants of Merkle’s original algorithm. Two approaches are described in RFC 8391, “XMSS: eXtended Merkle Signature Scheme” and RFC 8554, “Leighton-Micali Hash-Based Signatures.” Another approach, SPHINCS+, is an alternate in NIST’s post-quantum project.

Figure 1. Conventional DNSSEC signatures. DNS records are signed with the ZSK private key, and are thereby “chained” to the ZSK public key. The digital signatures may be hash-based signatures.

Hash-based signatures can potentially be applied to any part of the DNSSEC trust chain. For example, in Figure 1, the DNS record sets can be signed with a zone signing key (ZSK) that employs a hash-based signature algorithm.

The main challenge with hash-based signatures is that the signature size is large, on the order of tens or even hundreds of thousands of bits. This is perhaps why they haven’t seen significant adoption in security protocols over the past four decades.
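
To see why the signatures are so large, consider a sketch of a Lamport one-time signature, one of the classic hash-based building blocks. This is not XMSS, LMS, or SPHINCS+, and a Lamport key must never sign more than one message; the point of the sketch (which assumes SHA-256) is simply to show where the size comes from.

  # A sketch of a Lamport one-time signature, a classic hash-based construction,
  # shown only to illustrate why hash-based signatures are large. Not XMSS, LMS,
  # or SPHINCS+, and a Lamport key must never be used for more than one message.
  import hashlib, secrets

  HASH_BITS = 256

  def H(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def keygen():
      # Private key: 256 pairs of random 32-byte values; public key: their hashes.
      sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(HASH_BITS)]
      pk = [[H(x) for x in pair] for pair in sk]
      return sk, pk

  def digest_bits(msg: bytes):
      d = int.from_bytes(H(msg), "big")
      return [(d >> (HASH_BITS - 1 - i)) & 1 for i in range(HASH_BITS)]

  def sign(sk, msg: bytes):
      # Reveal one 32-byte preimage per digest bit: 256 x 32 bytes = 65,536 bits.
      return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

  def verify(pk, msg: bytes, sig) -> bool:
      return all(H(sig[i]) == pk[i][b] for i, b in enumerate(digest_bits(msg)))

  sk, pk = keygen()
  record = b"www.example.com. A 93.184.216.34"
  sig = sign(sk, record)
  print(verify(pk, record, sig))               # True
  print(sum(len(s) for s in sig) * 8)          # 65536 bits for this one signature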

Synthesizing ZSKs with Merkle Trees

Verisign Labs has been exploring how to mitigate the size impact of hash-based signatures on DNSSEC, while still basing security on hash functions only in the interest of stable post-quantum protections.

One of the ideas we’ve come up with uses another of Merkle’s foundational contributions: Merkle trees.

Merkle trees authenticate multiple records by hashing them together in a tree structure. The records are the “leaves” of the tree. Pairs of leaves are hashed together to form a branch, then pairs of branches are hashed together to form a larger branch, and so on. The hash of the largest branches is the tree’s “root.” (This is a data-structure root, unrelated to the DNS root.)

Each individual leaf of a Merkle tree can be authenticated by retracing the “path” from the leaf to the root. The path consists of the hashes of each of the adjacent branches encountered along the way.

Authentication paths can be much shorter than typical hash-based signatures. For instance, with a tree depth of 20 and a 256-bit hash value, the authentication path for a leaf would only be 5,120 bits long, yet a single tree could authenticate more than a million leaves.

Figure 2. DNSSEC signatures following the synthesized ZSK approach proposed here. DNS records are hashed together into a Merkle tree. The root of the Merkle tree is published as the ZSK, and the authentication path through the Merkle tree is the record’s signature.

Returning to the example above, suppose that instead of signing each DNS record set with a hash-based signature, each record set were considered a leaf of a Merkle tree. Suppose further that the root of this tree were to be published as the ZSK public key (see Figure 2). The authentication path to the leaf could then serve as the record set’s signature.

The validation logic at a resolver would be the same as in ordinary DNSSEC:

  • The resolver would obtain the ZSK public key from a DNSKEY record set signed by the KSK.
  • The resolver would then validate the signature on the record set of interest with the ZSK public key.

The only difference on the resolver’s side would be that signature validation would involve retracing the authentication path to the ZSK public key, rather than a conventional signature validation operation.

The ZSK public key produced by the Merkle tree approach would be a “synthesized” public key, in that it is obtained from the records being signed. This is noteworthy from a cryptographer’s perspective, because the public key wouldn’t have a corresponding private key, yet the DNS records would still, in effect, be “signed by the ZSK!”
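
Here is a minimal sketch of that flow, assuming SHA-256, a duplicate-last-node padding rule, and a toy record encoding; any real design would specify these details differently. It builds a tree over a zone’s record sets, treats the root as the synthesized ZSK, and verifies one record’s authentication path.

  # A sketch of the synthesized ZSK idea: hash record sets into a Merkle tree,
  # publish the tree root as the "ZSK," and use each leaf's authentication path
  # as its signature. Illustration only; encodings and padding are simplified.
  import hashlib

  def H(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def build_tree(leaves):
      """Return every level of the tree, from the leaf hashes up to the root."""
      level = [H(leaf) for leaf in leaves]
      levels = [level]
      while len(level) > 1:
          if len(level) % 2:                    # simple padding rule for the sketch
              level = level + [level[-1]]
          level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
          levels.append(level)
      return levels

  def auth_path(levels, index):
      """Sibling hashes from a leaf up to the root: the record's 'signature.'"""
      path = []
      for level in levels[:-1]:
          if len(level) % 2:
              level = level + [level[-1]]
          path.append((index & 1, level[index ^ 1]))   # (am I the right child?, sibling)
          index //= 2
      return path

  def verify(leaf, path, root) -> bool:
      node = H(leaf)
      for is_right, sibling in path:
          node = H(sibling + node) if is_right else H(node + sibling)
      return node == root

  records = [f"name{i}.example. A 192.0.2.{i % 250}".encode() for i in range(1000)]
  levels = build_tree(records)
  zsk = levels[-1][0]                           # the synthesized "ZSK public key"
  sig = auth_path(levels, 123)                  # the "signature" on records[123]
  print(verify(records[123], sig, zsk))         # True
  # With a depth-20 tree and 256-bit hashes, a path would be 20 x 256 = 5,120
  # bits while covering 2^20 (over a million) record sets, as noted above.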

Additional Design Considerations

In this type of DNSSEC implementation, the Merkle tree approach only applies to the ZSK level. Hash-based signatures would still be applied at the KSK level, although their overhead would now be “amortized” across all records in the zone.

In addition, each new ZSK would need to be signed “on demand,” rather than in advance, as in current operational practice.

This leads to tradeoffs, such as how many changes to accumulate before constructing and publishing a new tree. With fewer changes, the tree will be available sooner; with more changes, the tree will be larger, so the per-record overhead of the signatures at the KSK level will be lower.
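
A back-of-the-envelope calculation shows the flavor of that tradeoff; the signature size below is an illustrative placeholder, not a parameter of any particular scheme.

  # Illustrative only: amortizing one hash-based, KSK-level signature over all
  # of the record sets covered by a single Merkle tree.
  KSK_SIG_BITS = 100_000                        # placeholder hash-based signature size

  for records_per_tree in (1_000, 100_000, 1_000_000):
      per_record = KSK_SIG_BITS / records_per_tree
      print(f"{records_per_tree:>9} records per tree: {per_record:.3f} bits of KSK signature per record")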

Conclusion

My last few posts have discussed cryptographic techniques that could potentially be applied to the DNS in the long term — or that might not even be applied at all. In my next post, I’ll return to more conventional subjects, and explain how Verisign sees cryptography fitting into the DNS today, as well as some important non-cryptographic techniques that are part of our vision for a secure, stable and resilient DNS.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization
Research into concepts such as hash-based signatures and synthesized zone signing keys indicates that these techniques have the potential to keep the Domain Name System (DNS) secure for the long term if added into the Domain Name System Security Extensions (DNSSEC).

The post Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys appeared first on Verisign Blog.

]]>
Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon https://blog.verisign.com/security/securing-the-dns-in-a-post-quantum-world-new-dnssec-algorithms-on-the-horizon/ Tue, 19 Jan 2021 21:48:33 +0000 https://blog.verisign.com/?p=5712 This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS). One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer. If theory holds, a quantum computer could break established public-key […]

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

]]>

This is the fourth in a multi-part series on cryptography and the Domain Name System (DNS).

One of the “key” questions cryptographers have been asking for the past decade or more is what to do about the potential future development of a large-scale quantum computer.

If theory holds, a quantum computer could break established public-key algorithms including RSA and elliptic curve cryptography (ECC), building on Peter Shor’s groundbreaking result from 1994.

This prospect has motivated research into new so-called “post-quantum” algorithms that are less vulnerable to quantum computing advances. These algorithms, once standardized, may well be added into the Domain Name System Security Extensions (DNSSEC) — thus also adding another dimension to a cryptographer’s perspective on the DNS.

(Caveat: Once again, the concepts I’m discussing in this post are topics we’re studying in our long-term research program as we evaluate potential future applications of technology. They do not necessarily represent Verisign’s plans or position on possible new products or services.)

Post-Quantum Algorithms

The National Institute of Standards and Technology (NIST) started a Post-Quantum Cryptography project in 2016 to “specify one or more additional unclassified, publicly disclosed digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.”

Security protocols that NIST is targeting for these algorithms, according to its 2019 status report (Section 2.2.1), include: “Transport Layer Security (TLS), Secure Shell (SSH), Internet Key Exchange (IKE), Internet Protocol Security (IPsec), and Domain Name System Security Extensions (DNSSEC).”

The project is now in its third round, with seven finalists, including three digital signature algorithms, and eight alternates.

NIST’s project timeline anticipates that the draft standards for the new post-quantum algorithms will be available between 2022 and 2024.

It will likely take several additional years for standards bodies such as the Internet Engineering Task Force (IETF) to incorporate the new algorithms into security protocols. Broad deployments of the upgraded protocols will likely take several years more.

Post-quantum algorithms can therefore be considered a long-term issue, not a near-term one. However, as with other long-term research, it’s appropriate to draw attention to factors that need to be taken into account well ahead of time.

DNSSEC Operational Considerations

The three candidate digital signature algorithms in NIST’s third round have one common characteristic: all of them have a key size or signature size (or both) that is much larger than for current algorithms.

Key and signature sizes are important operational considerations for DNSSEC because most of the DNS traffic exchanged with authoritative name servers is sent and received via the User Datagram Protocol (UDP), which has a limited response size.

Response size concerns were evident during the expansion of the root zone signing key (ZSK) from 1024-bit to 2048-bit RSA in 2016, and in the rollover of the root key signing key (KSK) in 2018. In the latter case, although the signature and key sizes didn’t change, total response size was still an issue because responses during the rollover sometimes carried as many as four keys rather than the usual two.

Thanks to careful design and implementation, response sizes during these transitions generally stayed within typical UDP limits. Equally important, response sizes also appeared to have stayed within the Maximum Transmission Unit (MTU) of most networks involved, thereby also avoiding the risk of packet fragmentation. (You can check how well your network handles various DNSSEC response sizes with this tool developed by Verisign Labs.)

Modeling Assumptions

The larger sizes associated with certain post-quantum algorithms do not appear to be a significant issue either for TLS, according to one benchmarking study, or for public-key infrastructures, according to another report. However, a recently published study of post-quantum algorithms and DNSSEC observes that “DNSSEC is particularly challenging to transition” to the new algorithms.

Verisign Labs offers the following observations about DNSSEC-related queries that may help researchers to model DNSSEC impact:

A typical resolver that implements both DNSSEC validation and qname minimization will send a combination of queries to Verisign’s root and top-level domain (TLD) servers.

Because the resolver is a validating resolver, these queries will all have the “DNSSEC OK” bit set, indicating that the resolver wants the DNSSEC signatures on the records.

The content of typical responses by Verisign’s root and TLD servers to these queries is given in Table 1 below. (In the table, <SLD>.<TLD> are the final two labels of a domain name of interest, including the TLD and the second-level domain (SLD); record types involved include A, Name Server (NS), and DNSKEY.)

Name server: Root
• Resolver query scenario: DNSKEY record set for root zone
  Typical response content from Verisign’s servers: DNSKEY record set including root KSK RSA-2048 public key and root ZSK RSA-2048 public key; root KSK RSA-2048 signature on DNSKEY record set
• Resolver query scenario: A or NS record set for <TLD> — when <TLD> exists
  Typical response content from Verisign’s servers: NS referral to <TLD> name server; DS record set for <TLD> zone; root ZSK RSA-2048 signature on DS record set
• Resolver query scenario: A or NS record set for <TLD> — when <TLD> doesn’t exist
  Typical response content from Verisign’s servers: up to two NSEC records for non-existence of <TLD>; root ZSK RSA-2048 signatures on NSEC records

Name server: .com / .net
• Resolver query scenario: DNSKEY record set for <TLD> zone
  Typical response content from Verisign’s servers: DNSKEY record set including <TLD> KSK RSA-2048 public key and <TLD> ZSK RSA-1280 public key; <TLD> KSK RSA-2048 signature on DNSKEY record set
• Resolver query scenario: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> exists
  Typical response content from Verisign’s servers: NS referral to <SLD>.<TLD> name server; DS record set for <SLD>.<TLD> zone (if <SLD>.<TLD> supports DNSSEC); <TLD> ZSK RSA-1280 signature on DS record set (if present)
• Resolver query scenario: A or NS record set for <SLD>.<TLD> — when <SLD>.<TLD> doesn’t exist
  Typical response content from Verisign’s servers: up to three NSEC3 records for non-existence of <SLD>.<TLD>; <TLD> ZSK RSA-1280 signatures on NSEC3 records

Table 1. Combination of queries that may be sent to Verisign’s root and TLD servers by a typical resolver that implements both DNSSEC validation and qname minimization, and content of associated responses.


For an A or NS query, the typical response, when the domain of interest exists, includes a referral to another name server. If the domain supports DNSSEC, the response also includes a set of Delegation Signer (DS) records providing the hashes of each of the referred zone’s KSKs — the next link in the DNSSEC trust chain. When the domain of interest doesn’t exist, the response includes one or more Next Secure (NSEC) or Next Secure 3 (NSEC3) records.

Researchers can estimate the effect of post-quantum algorithms on response size by replacing the sizes of the various RSA keys and signatures with those for their post-quantum counterparts. As discussed above, it is important to keep in mind that the number of keys returned may be larger during key rollovers.

Most of the queries from qname-minimizing, validating resolvers to the root and TLD name servers will be for A or NS records (the choice depends on the implementation of qname minimization, and has recently trended toward A). The signature size for a post-quantum algorithm, which affects all DNSSEC-related responses, will therefore generally have a much larger impact on average response size than will the key size, which affects only the DNSKEY responses.
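
As a starting point, here is a rough sketch of how such an estimate might be set up. The byte counts are illustrative placeholders (not measurements of Verisign’s servers, and not the parameters of any particular NIST candidate); the point is only the structure of the calculation.

  # A rough sketch of modeling response sizes for the scenarios in Table 1.
  # All byte counts are illustrative placeholders, not measured values.
  ALG_SIZES = {
      "RSA-2048":   {"sig": 256, "key": 256},
      "RSA-1280":   {"sig": 160, "key": 160},
      "PQ-EXAMPLE": {"sig": 2500, "key": 1300},   # hypothetical post-quantum stand-in
  }

  def referral_response(alg: str, fixed_overhead: int = 300) -> int:
      """Referral carrying a DS record set and one covering signature."""
      return fixed_overhead + ALG_SIZES[alg]["sig"]

  def dnskey_response(alg: str, num_keys: int = 2, fixed_overhead: int = 100) -> int:
      """DNSKEY response: num_keys public keys plus one KSK signature.
      Remember that rollovers can temporarily increase num_keys."""
      s = ALG_SIZES[alg]
      return fixed_overhead + num_keys * s["key"] + s["sig"]

  for alg in ("RSA-2048", "PQ-EXAMPLE"):
      print(alg,
            "referral:", referral_response(alg), "bytes;",
            "DNSKEY:", dnskey_response(alg), "bytes;",
            "DNSKEY during rollover:", dnskey_response(alg, num_keys=4), "bytes")
  # Compare the results against typical UDP and MTU limits (for example, 1,232
  # or 1,400 bytes) to see which scenarios risk truncation or fragmentation.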

Conclusion

Post-quantum algorithms are among the newest developments in cryptography. They add another dimension to a cryptographer’s perspective on the DNS because of the possibility that these algorithms, or other variants, may be added to DNSSEC in the long term.

In my next post, I’ll make the case for why the oldest post-quantum algorithm, hash-based signatures, could be a particularly good match for DNSSEC. I’ll also share the results of some research at Verisign Labs into how the large signature sizes of hash-based signatures could potentially be overcome.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization
Post-quantum algorithms are among the newest developments in cryptography. When standardized, they could eventually be added into the Domain Name System Security Extensions (DNSSEC) to help keep the DNS secure for the long term.

The post Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon appeared first on Verisign Blog.

]]>
Verisign Outreach Program Remediates Billions of Name Collision Queries https://blog.verisign.com/domain-names/verisign-outreach-program-remediates-billions-of-name-collision-queries/ Fri, 15 Jan 2021 17:33:58 +0000 https://blog.verisign.com/?p=5674 A name collision occurs when a user attempts to resolve a domain in one namespace, but it unexpectedly resolves in a different namespace. Name collision issues in the public global Domain Name System (DNS) cause billions of unnecessary and potentially unsafe DNS queries every day. A targeted outreach program that Verisign started in March 2020 […]

The post Verisign Outreach Program Remediates Billions of Name Collision Queries appeared first on Verisign Blog.

]]>

A name collision occurs when a user attempts to resolve a domain in one namespace, but it unexpectedly resolves in a different namespace. Name collision issues in the public global Domain Name System (DNS) cause billions of unnecessary and potentially unsafe DNS queries every day. A targeted outreach program that Verisign started in March 2020 has remediated one billion queries per day to the A and J root name servers, via 46 collision strings. After contacting several national internet service providers (ISPs), the outreach effort grew to include large search engines, social media companies, networking equipment manufacturers, national CERTs, security trust groups, commercial DNS providers, and financial institutions.

While this unilateral outreach effort resulted in significant and successful name collision remediation, it is broader DNS community engagement, education, and participation that offers the potential to address many of the remaining name collision problems. Verisign hopes its successes will encourage participation by other organizations in similar positions in the DNS community.

Verisign is proud to be the operator for two of the world’s 13 authoritative root servers. Being a root server operator carries with it many operational responsibilities. Ensuring the security, stability and resiliency of the DNS requires proactive efforts so that attacks against the root name servers do not disrupt DNS resolution, as well as the monitoring of DNS resolution patterns for misconfigurations, signaling telemetry, and unexpected or unintended uses that, without closer collaboration, could have unforeseen consequences (e.g. Chromium’s impact on root DNS traffic).

Monitoring may require various forms of responsible disclosure or notification to the underlying parties. Further, monitoring the root server system poses logistical challenges because any outreach and remediation programs must work at internet scale, and because root operators have no direct relationship with many of the involved entities.

Despite these challenges, Verisign has conducted several successful internet-scale outreach efforts to address various issues we have observed in the DNS.

In response to the Internet Corporation for Assigned Names and Numbers (ICANN) proposal to mitigate name collision risks in 2013, Verisign conducted a focused study on the collision string .CBA. Our measurement study revealed evidence of a substantial internet-connected infrastructure in Japan that relied on the non-resolution of names that end in .CBA. Verisign informed the network operator, who subsequently reconfigured some of its internal systems, resulting in an immediate decline of queries for .CBA observed at A and J root servers.

Prior to the 2018 KSK rollover, several operators of DNSSEC-validating name servers appeared to be sending out-of-date RFC 8145 signals to root name servers. To ensure the KSK rollover did not disrupt internet name resolution functions for billions of end users, Verisign augmented ICANN’s outreach effort and conducted a multi-faceted technical outreach program by contacting and working with The United States Computer Emergency Readiness Team (US-CERT) and other national CERTs, industry partners, various DNS operator groups and performing direct outreach to out-of-date signalers. The ultimate success of the KSK rollover was due in large part to outreach efforts by ICANN and Verisign.

In response to the ICANN Board’s request in resolutions 2017.11.02.29 – 2017.11.02.31, the ICANN Security and Stability Advisory Committee (SSAC) was asked to conduct studies, and to present data and points of view on collision strings, including specific advice on three higher risk strings: .CORP, .HOME and .MAIL. While Verisign is actively engaged in this Name Collision Analysis Project (NCAP) developed by SSAC, we are also reviving and expanding our 2012 name collision outreach efforts.

Verisign’s name collision outreach program is based on the guidance we provided in several recent peer-reviewed name collision publications, which highlighted various name collision vulnerabilities, examined the root causes of leaked queries, and made remediation recommendations. Verisign’s program uses A and J root name server traffic data to identify high-affinity strings related to particular networks, as well as high query volume strings that are contextually associated with device manufacturers, software, or platforms. We then attempt to contact the underlying parties and assist with remediation as appropriate.

While we partially rely on direct communication channel contact information, the key enabler of our outreach efforts has been Verisign’s relationships with the broader collective DNS community. Verisign’s active participation in various industry organizations within the ICANN and DNS communities, such as M3AAWG, FIRST, DNS-OARC, APWG, NANOG, RIPE NCC, APNIC, and IETF1, enables us to identify and communicate with a broad and diverse set of constituents. In many cases, participants operate infrastructure involved in name collisions. In others, they are able to put us in direct contact with the appropriate parties.

Through a combination of DNS traffic analysis and publicly accessible data, as well as the rolodexes of various industry partnerships, across 2020 we were able to achieve effective outreach to the anonymized entities listed in Table 1.

Organization | Queries per Day to A & J | Status | Number of Collision Strings (TLDs) | Notes / Root Cause Analysis
Search Engine | 650M | Fixed | 1 string | Application not using FQDNs
Telecommunications Provider | 250M | Fixed | N/A | Prefetching bug
eCommerce Provider | 150M | Fixed | 25 strings | Application not using FQDNs
Networking Manufacturer | 70M | Pending | 3 strings | Suffix search list
Cloud Provider | 64M | Fixed | 15 strings | Suffix search list
Telecommunications Provider | 60M | Fixed | 2 strings | Remediated through device vendor
Networking Manufacturer | 45M | Pending | 2 strings | Suffix search list problem in router/modem device
Financial Corporation | 35M | Fixed | 2 strings | Typo / misconfiguration
Social Media Company | 30M | Pending | 9 strings | Application not using FQDNs
ISP | 20M | Fixed | 1 string | Suffix search list problem in router/modem device
Software Provider | 20M | Pending | 50+ strings | Acknowledged but still investigating
ISP | 5M | Pending | 1 string | At time of writing, still investigating but confirmed it is a router/modem device
Table 1. Sample of outreach efforts performed by Verisign.

Many of the name collision problems encountered are the result of misconfigurations and not using fully qualified domain names. After operators deploy patches to their environments, as shown in Figure 1 below, Verisign often observes an immediate and dramatic traffic decrease at A and J root name servers. Although several networking equipment vendors and ISPs acknowledge their name collision problems, the development and deployment of firmware to a large userbase will take time.

Figure 1. Daily queries for two collision strings to A and J root servers during a nine month period of time.

Cumulatively, the operators who have deployed patches constitute a reduction of one billion queries per day to A and J root servers (roughly 3% of total traffic). Although root traffic is not evenly distributed among the 13 authoritative servers, we expect a similar impact at the other 11, resulting in a system-wide reduction of approximately 6.5 billion queries per day.

As the ICANN community prepares for Subsequent Procedures (the introduction of additional new TLDs) and the SSAC NCAP continues to work to answer the ICANN Board’s questions, we encourage the community to participate in our efforts to address name collisions through active outreach efforts. We believe our efforts show how outreach can have significant impact to both parties and the broader community. Verisign is committed to addressing name collision problems and will continue executing the outreach program to help minimize the attack surface exposed by name collisions and to be a responsible and hygienic root operator.

For additional information about name collisions and how to properly manage private-use TLDs, please visit ICANN’s Name Collision Resource & Information website.


1. The Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), Forum of Incident Response and Security Teams (FIRST), DNS Operations, Analysis, and Research Center (DNS-OARC), Anti-Phishing Working Group (APWG), North American Network Operators’ Group (NANOG), Réseaux IP Européens Network Coordination Centre (RIPE NCC), Asia Pacific Network Information Centre (APNIC), Internet Engineering Task Force (IETF)

Learn how Verisign’s targeted outreach identifies and remediates name collision issues within the DNS.

The post Verisign Outreach Program Remediates Billions of Name Collision Queries appeared first on Verisign Blog.

]]>
Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries https://blog.verisign.com/security/newer-cryptographic-advances-for-the-domain-name-system-nsec5-and-tokenized-queries/ Thu, 14 Jan 2021 19:09:33 +0000 https://blog.verisign.com/?p=5676 This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS). In my last post, I looked at what happens when a DNS query renders a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. […]

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

]]>

This is the third in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my last post, I looked at what happens when a DNS query renders a “negative” response – i.e., when a domain name doesn’t exist. I then examined two cryptographic approaches to handling negative responses: NSEC and NSEC3. In this post, I will examine a third approach, NSEC5, and a related concept that protects client information, tokenized queries.

The concepts I discuss below are topics we’ve studied in our long-term research program as we evaluate new technologies. They do not necessarily represent Verisign’s plans or position on a new product or service. Concepts developed in our research program may be subject to U.S. and international patents and patent applications.

NSEC5

NSEC5 is a result of research by cryptographers at Boston University and the Weizmann Institute. In this approach, which is still in an experimental stage, the endpoints are the outputs of a verifiable random function (VRF), a cryptographic primitive that has been gaining interest in recent years. NSEC5 is documented in an Internet Draft (currently expired) and in several research papers.

A VRF is like a hash function but with two important differences:

  1. In addition to a message input, a VRF has a second input, a private key. (As in public-key cryptography, there’s also a corresponding public key.) No one can compute the outputs without the private key, hence the “random.”
  2. A VRF has two outputs: a token and a proof. (I’ve adopted the term “token” for alignment with the research that I describe next. NSEC5 itself simply uses “hash.”) Anyone can check that the token is correct given the proof and the public key, hence the “verifiable.”

So, it’s not only hard for an adversary to reverse the VRF – which is also a property the hash function has – but it’s also hard for the adversary to compute the VRF in the forward direction, thus preventing dictionary attacks. And yet a relying party can still confirm that the VRF output for a given input is correct, because of the proof.

How does this work in practice? As in NSEC and NSEC3, range statements are prepared in advance and signed with the zone signing key (ZSK). With NSEC5, however, the range endpoints are two consecutive tokens.

When a domain name doesn’t exist, the name server applies the VRF to the domain name to obtain a token and a proof. The name server then returns a range statement where the token falls within the range, as well as the proof, as shown in the figure below. Note that the token values are for illustration only.

Figure 1. An example of a NSEC5 proof of non-existence based on a verifiable random function.

Because the range statement reveals only tokenized versions of other domain names in a zone, an adversary who doesn’t know the private key doesn’t learn any new existing domain names from the response. Indeed, to find out which domain name corresponds to one of the tokenized endpoints, the adversary would need access to the VRF itself to see if a candidate domain name has a matching hash value, which would involve an online dictionary attack. This significantly reduces disclosure risk.
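
To make the “token plus proof” idea more concrete, here is a toy sketch in the style of an RSA full-domain-hash VRF. It uses textbook RSA with tiny, insecure parameters purely for illustration; the actual NSEC5 construction and its parameters are specified in the NSEC5 draft and papers.

  # A toy "token plus proof" sketch in the style of an RSA full-domain-hash VRF.
  # Tiny, unpadded RSA like this is insecure; illustration only.
  import hashlib

  def H(data: bytes) -> bytes:
      return hashlib.sha256(data).digest()

  def is_prime(m: int) -> bool:
      if m < 2:
          return False
      i = 2
      while i * i <= m:
          if m % i == 0:
              return False
          i += 1
      return True

  def next_prime(m: int) -> int:
      while not is_prime(m):
          m += 1
      return m

  # Toy RSA key pair (far too small for real use). Needs Python 3.8+ for pow(e, -1, phi).
  p, q = next_prime(10**6), next_prime(10**6 + 100)
  n, e = p * q, 65537
  d = pow(e, -1, (p - 1) * (q - 1))

  def vrf_prove(name: bytes):
      """Private-key holder computes (token, proof) for a domain name."""
      h = int.from_bytes(H(name), "big") % n      # full-domain hash of the name
      proof = pow(h, d, n)                        # RSA "signature" on the hash
      token = H(proof.to_bytes(16, "big")).hex()[:8]
      return token, proof

  def vrf_verify(name: bytes, token: str, proof: int) -> bool:
      """Anyone holding the public key (n, e) can check the token is correct."""
      h = int.from_bytes(H(name), "big") % n
      return pow(proof, e, n) == h and H(proof.to_bytes(16, "big")).hex()[:8] == token

  token, proof = vrf_prove(b"example.name.")
  print(token, vrf_verify(b"example.name.", token, proof))    # prints the token and True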

The name server needs a copy of the zone’s NSEC5 private key so that it can generate proofs for non-existent domain names. The ZSK itself can stay in the provisioning system. As the designers of NSEC5 have pointed out, if the NSEC5 private key does happen to be compromised, this only makes it possible to do a dictionary attack offline — not to generate signatures on new range statements, or on new positive responses.

NSEC5 is interesting from a cryptographer’s perspective because it uses a less common cryptographic technique, a VRF, to achieve a design goal that was at best partially met by previous approaches. As with other new technologies, DNS operators will need to consider whether NSEC5’s benefits are sufficient to justify its cost and complexity. Verisign doesn’t have any plans to implement NSEC5, as we consider NSEC and NSEC3 adequate for the name servers we currently operate. However, we will continue to track NSEC5 and related developments as part of our long-term research program.

Tokenized Queries

A few years before NSEC5 was published, Verisign Labs had started some research on an opposite application of tokenization to the DNS, to protect a client’s information from disclosure.

In our approach, instead of asking the resolver “What is <name>’s IP address,” the client would ask “What is token 3141…’s IP address,” where 3141… is the tokenization of <name>.

(More precisely, the client would specify both the token and the parent zone that the token relates to, e.g., the TLD of the domain name. Only the portion of the domain name below the parent would be obscured, just as in NSEC5. I’ve omitted the zone information for simplicity in this discussion.)

Suppose now that the domain name corresponding to token 3141… does exist. Then the resolver would respond with the domain name’s IP address as usual, as shown in the next figure.

Figure 2. Tokenized queries.

In this case, the resolver would know that the domain name associated with the token does exist, because it would have a mapping between the token and the DNS record, i.e., the IP address. Thus, the resolver would effectively “know” the domain name as well for practical purposes. (We’ve developed another approach that can protect both the domain name and the DNS record from disclosure to the resolver in this case, but that’s perhaps a topic for another post.)

Now, consider a domain name that doesn’t exist and suppose that its token is 2718… .

In this case, the resolver would respond that the domain name doesn’t exist, as usual, as shown below.

Figure 3. Non-existence with tokenized queries.

But because the domain name is tokenized and no other information about the domain name is returned, the resolver would only learn the token 2718… (and the parent zone), not the actual domain name that the client is interested in.

The resolver could potentially know that the name doesn’t exist via a range statement from the parent zone, as in NSEC5.

How does the client tokenize the domain name, if it doesn’t have the private key for the VRF? The name server would offer a public interface to the tokenization function. This can be done in what cryptographers call an “oblivious” VRF protocol, where the name server doesn’t see the actual domain name during the protocol, yet the client still gets the token.

To keep the resolver itself from using this interface to do an online dictionary attack that matches candidate domain names with tokens, the name server could rate-limit access, or restrict it only to authorized requesters.
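
Here is a minimal sketch of the message flow only, with hypothetical names. The tokenize() stub stands in for the oblivious VRF protocol described above; unlike the real design, it is written as a local keyed hash so the example stays self-contained, which means this sketch does not capture the obliviousness property itself.

  # A minimal sketch of the tokenized-query message flow. The tokenize() stub
  # stands in for the oblivious VRF protocol; in the real design the client
  # would obtain tokens without the name server seeing the name, and without
  # the client holding the key used below. Hypothetical names throughout.
  import hashlib, hmac

  TOKENIZATION_KEY = b"held by the name server in the real design"   # stand-in secret

  def tokenize(parent_zone: str, name: str) -> str:
      # Tokenize only the portion of the name below the parent zone.
      below_parent = name[: -(len(parent_zone) + 2)]   # strip ".<parent>."
      return hmac.new(TOKENIZATION_KEY, below_parent.encode(), hashlib.sha256).hexdigest()[:8]

  # The resolver's view: a map from (parent zone, token) to DNS records.
  resolver_records = {
      ("name", tokenize("name", "registered-example.name.")): "192.0.2.7",
  }

  def resolve(parent_zone: str, token: str):
      """The resolver sees only the parent zone and a token, never the full name."""
      return resolver_records.get((parent_zone, token))

  t_exists = tokenize("name", "registered-example.name.")
  t_missing = tokenize("name", "example.name.")
  print(resolve("name", t_exists))    # 192.0.2.7
  print(resolve("name", t_missing))   # None: "doesn't exist," and the name was not revealed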

Additional details on this technology may be found in U.S. Patent 9,202,079B2, entitled “Privacy preserving data querying,” and related patents.

It’s interesting from a cryptographer’s perspective that there’s a way for a client to find out whether a DNS record exists, without necessarily revealing the domain name of interest. However, as before, the benefits of this new technology will be weighed against its operational cost and complexity and compared to other approaches. Because this technique focuses on client-to-resolver interactions, it’s already one step removed from the name servers that Verisign currently operates, so it is not as relevant to our business today as it might have been when we started the research. This one will stay under our long-term tracking as well.

Conclusion

The examples I’ve shared in these last two blog posts make it clear that cryptography has the potential to bring interesting new capabilities to the DNS. While the particular examples I’ve shared here do not meet the criteria for our product roadmap, researching advances in cryptography and other techniques remains important because new events can sometimes change the calculus. That point will become even more evident in my next post, where I’ll consider the kinds of cryptography that may be needed in the event that one or more of today’s algorithms is compromised, possibly through the introduction of a quantum computer.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries appeared first on Verisign Blog.

]]>
Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 https://blog.verisign.com/security/cryptographic-tools-for-non-existence-in-the-domain-name-system-nsec-and-nsec3/ Wed, 13 Jan 2021 18:06:22 +0000 https://blog.verisign.com/?p=5661 This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS). In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the […]

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

]]>

This is the second in a multi-part blog series on cryptography and the Domain Name System (DNS).

In my previous post, I described the first broad scale deployment of cryptography in the DNS, known as the Domain Name System Security Extensions (DNSSEC). I described how a name server can enable a requester to validate the correctness of a “positive” response to a query — when a queried domain name exists — by adding a digital signature to the DNS response returned.

The designers of DNSSEC, as well as academic researchers, have separately considered how to handle “negative” responses – when the domain name doesn’t exist. In this case, as I’ll explain, responding with a signed “does not exist” is not the best design. This makes the non-existence case interesting from a cryptographer’s perspective as well.

Initial Attempts

Consider a domain name like example.arpa that doesn’t exist.

If it did exist, then as I described in my previous post, the second-level domain (SLD) server for example.arpa would return a response signed by example.arpa’s zone signing key (ZSK).

So a first try for the case that the domain name doesn’t exist is for the SLD server to return the response “example.arpa doesn’t exist,” signed by example.arpa’s ZSK.

However, if example.arpa doesn’t exist, then example.arpa won’t have either an SLD server or a ZSK to sign with. So, this approach won’t work.

A second try is for the parent name server — the .arpa top-level domain (TLD) server in the example — to return the response “example.arpa doesn’t exist,” signed by the parent’s ZSK.

This could work if the .arpa DNS server knows the ZSK for .arpa. However, for security and performance reasons, the design preference for DNSSEC has been to keep private keys offline, within the zone’s provisioning system.

The provisioning system can precompute statements about domain names that do exist — but not about every possible individual domain name that doesn’t exist. So, this won’t work either, at least not for the servers that keep their private keys offline.

The third try is the design that DNSSEC settled on. The parent name server returns a “range statement,” previously signed with the ZSK, that states that there are no domain names in an ordered sequence between two “endpoints” where the endpoints depend on domain names that do exist. The range statements can therefore be signed offline, and yet the name server can still choose an appropriate signed response to return, based on the (non-existent) domain name in the query.

The DNS community has considered several approaches to constructing range statements, and they have varying cryptographic properties. Below I’ve described two such approaches. For simplicity, I’ve focused just on the basics in the discussion that follows. The astute reader will recognize that there are many more details involved both in the specification and the implementation of these techniques.

NSEC

The first approach, called NSEC, involved no additional cryptography beyond the DNSSEC signature on the range statement. In NSEC, the endpoints are actual domain names that exist. NSEC stands for “Next Secure,” referring to the fact that the second endpoint in the range is the “next” existing domain name following the first endpoint.

The NSEC resource record is documented in one of the original DNSSEC specifications, RFC 4034, which was co-authored by Verisign.

The .arpa zone implements NSEC. When the .arpa server receives the request “What is the IP address of example.arpa,” it returns the response “There are no names between e164.arpa and home.arpa.” This exchange is shown in the figure below and is analyzed in the associated DNSviz graph. (The response is accurate as of the writing of this post; it could be different in the future if names were added to or removed from the .arpa zone.)

NSEC has a side effect: responses immediately reveal unqueried domain names in the zone. Depending on the sensitivity of the zone, this may be undesirable from the perspective of the minimum disclosure principle.

Figure 1. An example of a NSEC proof of non-existence (as of the writing of this post).
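
Here is a simplified sketch of the two halves of that design: precomputing one range statement per gap between existing names (offline, where each would be signed with the ZSK), and selecting the covering statement at query time. It uses plain string ordering and a handful of .arpa names purely for illustration; real NSEC ordering follows the DNS canonical ordering rules.

  # A simplified sketch of NSEC-style range statements: precompute one range per
  # gap between existing names (signed offline with the ZSK), then pick the
  # range covering a non-existent query name at query time. Plain string
  # ordering is used here; real NSEC uses canonical DNS name ordering.
  existing = sorted(["arpa.", "e164.arpa.", "home.arpa.", "in-addr.arpa.", "ip6.arpa."])

  # One range per gap, wrapping around from the last name back to the first.
  ranges = list(zip(existing, existing[1:] + existing[:1]))

  def covering_range(qname: str):
      """Return the presigned range statement that proves qname doesn't exist."""
      for lo, hi in ranges:
          wraps = hi <= lo
          if (lo < qname < hi) or (wraps and (qname > lo or qname < hi)):
              return lo, hi
      return None          # qname is one of the existing names

  print(covering_range("example.arpa."))   # ('e164.arpa.', 'home.arpa.')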

NSEC3

A second approach, called NSEC3, reduces the disclosure risk somewhat by defining the endpoints as hashes of existing domain names. (NSEC3 is documented in RFC 5155, which was also co-authored by Verisign.)

An example of NSEC3 can be seen with example.name, another domain that doesn’t exist. Here, the .name TLD server returns a range statement that “There are no domain names with hashes between 5SU9… and 5T48…”. Because the hash of example.name is “5SVV…” the response implies that “example.name” doesn’t exist.

This statement is shown in the figure below and in another DNSviz graph. (As above, the actual response could change if the .name zone changes.)

Figure 2. An example of a NSEC3 proof of non-existence based on a hash function (as of the writing of this post).

To find out which domain name corresponds to one of the hashed endpoints, an adversary would have to do a trial-and-error or “dictionary” attack across multiple guesses of domain names, to see if any has a matching hash value. Such a search could be performed “offline,” i.e., without further interaction with the name server, which is why the disclosure risk is only somewhat reduced.
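
The hashed variant can be sketched the same way. The toy hash below is a single SHA-1 of the lowercase name; real NSEC3, per RFC 5155, hashes the wire-format name with a salt and iteration count and encodes the result in Base32hex. The names are hypothetical.

  # A simplified sketch of NSEC3-style hashed range statements. The toy hash is
  # a single SHA-1 of the lowercase name; real NSEC3 (RFC 5155) hashes the
  # wire-format name with a salt and iterations and uses Base32hex encoding.
  import hashlib

  def toy_nsec3_hash(name: str) -> str:
      return hashlib.sha1(name.lower().encode()).hexdigest()[:8]

  existing = ["registered-one.name.", "registered-two.name.", "registered-three.name."]
  hashed = sorted(toy_nsec3_hash(n) for n in existing)
  ranges = list(zip(hashed, hashed[1:] + hashed[:1]))          # hashed endpoints only

  def proves_nonexistence(qname: str) -> bool:
      h = toy_nsec3_hash(qname)
      if h in hashed:
          return False                                         # the name (or a collision) exists
      return any((lo < h < hi) or (hi <= lo and (h > lo or h < hi)) for lo, hi in ranges)

  print(proves_nonexistence("example.name."))                  # True: covered by a hashed range
  # An adversary seeing only the hashed endpoints must hash candidate names,
  # an offline dictionary attack, to learn which names actually exist.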

NSEC and NSEC3 are mutually exclusive. Nearly all TLDs, including all TLDs operated by Verisign, implement NSEC3. In addition to .arpa, the root zone also implements NSEC.

In my next post, I’ll describe NSEC5, an approach still in the experimental stage, that replaces the hash function in NSEC3 with a verifiable random function (VRF) to protect against offline dictionary attacks. I’ll also share some research Verisign Labs has done on a complementary approach that helps protect a client’s queries for non-existent domain names from disclosure.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3 appeared first on Verisign Blog.

]]>
The Domain Name System: A Cryptographer’s Perspective https://blog.verisign.com/security/the-domain-name-system-a-cryptographers-perspective/ Fri, 08 Jan 2021 15:59:51 +0000 https://blog.verisign.com/?p=5625 Man looking at technical imageryThis is the first in a multi-part blog series on cryptography and the Domain Name System (DNS). As one of the earliest protocols in the internet, the DNS emerged in an era in which today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like […]

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

]]>

This is the first in a multi-part blog series on cryptography and the Domain Name System (DNS).

As one of the internet’s earliest protocols, the DNS emerged in an era when today’s global network was still an experiment. Security was not a primary consideration then, and the design of the DNS, like other parts of the internet of the day, did not have cryptography built in.

Today, cryptography is part of almost every protocol, including the DNS. And from a cryptographer’s perspective, as I described in my talk at last year’s International Cryptographic Module Conference (ICMC20), there’s so much more to the story than just encryption.

Where It All Began: DNSSEC

The first broad-scale deployment of cryptography in the DNS was not for confidentiality but for data integrity, through the Domain Name System Security Extensions (DNSSEC), introduced in 2005.

The story begins with the usual occurrence that happens millions of times a second around the world: a client asks a DNS resolver a query like “What is example.com’s Internet Protocol (IP) address?” The resolver in this case answers: “example.com’s IP address is 93.184.216.34”. (This is the correct answer.)

If the resolver doesn’t already know the answer to the request, then the process to find the answer goes something like this:

  • With qname minimization, when the resolver receives this request, it starts by asking a related question to one of the DNS’s 13 root servers, such as the A and J root servers operated by Verisign: “Where is the name server for the .com top-level domain (TLD)?”
  • The root server refers the resolver to the .com TLD server.
  • The resolver asks the TLD server, “Where is the name server for the example.com second-level domain (SLD)?”
  • The TLD server then refers the resolver to the example.com server.
  • Finally, the resolver asks the SLD server, “What is example.com’s IP address?” and receives an answer: “93.184.216.34”.
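As a rough sketch of the first step in this walk, the example below uses the open source dnspython library (assumed to be installed) to send the minimized question to a root server; 198.41.0.4 is a.root-servers.net. It is a simplified illustration written for this post, not a complete resolver: a real resolver would parse the referral, pick a .com server address from it, and repeat the process down the hierarchy.

    import dns.message
    import dns.query
    import dns.rdatatype

    def ask(server_ip: str, qname: str, rdtype) -> dns.message.Message:
        query = dns.message.make_query(qname, rdtype)
        return dns.query.udp(query, server_ip, timeout=3)

    # Step 1: ask a root server where the .com name servers are. The answer
    # section is empty; the authority and additional sections carry the referral.
    referral = ask("198.41.0.4", "com.", dns.rdatatype.NS)
    for rrset in referral.authority:
        print(rrset)

    # Steps 2-5 (not shown): follow the referral to a .com server, ask it for
    # the example.com name servers, then ask one of those for the A record.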

Digital Signatures

But how does the resolver know that the answer it ultimately receives is correct? The process defined by DNSSEC follows the same “delegation” model from root to TLD to SLD as I’ve described above.

Indeed, DNSSEC provides a way for the resolver to check that the answer is correct by validating a chain of digital signatures, examining a signature at each level of the DNS hierarchy (or technically, at each “zone” in the delegation process). These digital signatures are generated using public key cryptography, a well-understood process based on key pairs, one public and one private, in which the private key is used to sign data and the corresponding public key is used to verify the signature.

In a typical DNSSEC deployment, there are two active public keys per zone: a Key Signing Key (KSK) public key and a Zone Signing Key (ZSK) public key. (The reason for having two keys is so that one key can be changed locally, without the other key being changed.)

The responses returned to the resolver include digital signatures generated by either the corresponding KSK private key or the corresponding ZSK private key.

Using mathematical operations, the resolver checks all the digital signatures it receives in association with a given query. If they are valid, the resolver returns the “Digital Signature Validated” indicator to the client that initiated the query.

Trust Chains

Figure 1: A Simplified View of the DNSSEC Chain.

A convenient way to visualize the collection of digital signatures is as a “trust chain” from a “trust anchor” to the DNS record of interest, as shown in the figure above. The chain includes “chain links” at each level of the DNS hierarchy. Here’s how the “chain links” work:

The root KSK public key is the “trust anchor.” This key is widely distributed in resolvers so that they can independently authenticate digital signatures on records in the root zone, and thus authenticate everything else in the chain.

The root zone chain links consist of three parts:

  1. The root KSK public key is published as a DNS record in the root zone. It must match the trust anchor.
  2. The root ZSK public key is also published as a DNS record. It is signed by the root KSK private key, thus linking the two keys together.
  3. The hash of the TLD KSK public key is published as a DNS record. It is signed by the root ZSK private key, further extending the chain.

The TLD zone chain links also consist of three parts:

  1. The TLD KSK public key is published as a DNS record; its hash must match the hash published in the root zone.
  2. The TLD ZSK public key is published as a DNS record, which is signed by the TLD KSK private key.
  3. The hash of the SLD KSK public key is published as a DNS record. It is signed by the TLD ZSK private key.

The SLD zone chain links once more consist of three parts:

  1. The SLD KSK public key is published as a DNS record. Its hash, as expected, must match the hash published in the TLD zone.
  2. The SLD ZSK public key is published as a DNS record signed by the SLD KSK private key.
  3. A set of DNS records – the ultimate response to the query – is signed by the SLD ZSK private key.

A resolver (or anyone else) can thereby verify the signature on any set of DNS records given the chain of public keys leading up to the trust anchor.
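To summarize the linking logic in code form, here is a schematic Python sketch of the checks described above. The Zone fields and the verify() and digest() helpers are stand-ins of my own invention for the real DNSKEY, DS and RRSIG processing that validating resolvers perform; the sketch captures only how each link depends on the one above it.

    from dataclasses import dataclass

    @dataclass
    class Zone:
        ksk_pub: bytes        # KSK public key, published as a DNSKEY record
        zsk_pub: bytes        # ZSK public key, published as a DNSKEY record
        zsk_sig: bytes        # signature on the ZSK by the zone's KSK private key
        child_ds: bytes       # hash of the child zone's KSK (the DS record)
        child_ds_sig: bytes   # signature on the DS record by the zone's ZSK private key

    def verify(public_key: bytes, data: bytes, signature: bytes) -> bool:
        ...  # placeholder for real signature verification (e.g., RSA or ECDSA)

    def digest(data: bytes) -> bytes:
        ...  # placeholder for the DS digest computation (e.g., SHA-256)

    def chain_is_valid(trust_anchor: bytes, root: Zone, tld: Zone, sld: Zone,
                       answer: bytes, answer_sig: bytes) -> bool:
        return (
            # Root zone links.
            root.ksk_pub == trust_anchor
            and verify(root.ksk_pub, root.zsk_pub, root.zsk_sig)
            and verify(root.zsk_pub, root.child_ds, root.child_ds_sig)
            # TLD zone links.
            and digest(tld.ksk_pub) == root.child_ds
            and verify(tld.ksk_pub, tld.zsk_pub, tld.zsk_sig)
            and verify(tld.zsk_pub, tld.child_ds, tld.child_ds_sig)
            # SLD zone links.
            and digest(sld.ksk_pub) == tld.child_ds
            and verify(sld.ksk_pub, sld.zsk_pub, sld.zsk_sig)
            # Finally, the answer itself is signed by the SLD's ZSK private key.
            and verify(sld.zsk_pub, answer, answer_sig)
        )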

Note that this is a simplified view, and there are other details in practice. For instance, the various KSK public keys are also signed by their own private KSK, but I’ve omitted these signatures for brevity. The DNSViz tool provides a very nice interactive interface for browsing DNSSEC trust chains in detail, including the trust chain for example.com discussed here.

Updating the Root KSK Public Key

The effort to update the root KSK public key, the aforementioned “trust anchor,” was one of the more challenging and successful projects undertaken by the DNS community over the past couple of years. This initiative, the so-called “root KSK rollover,” was challenging because there was no easy way to determine whether resolvers actually had been updated to use the latest root KSK; remember that cryptography and security were added onto the DNS rather than built in. There were many resolvers that needed to be updated, each independently managed.

The research paper “Roll, Roll, Roll your Root: A Comprehensive Analysis of the First Ever DNSSEC Root KSK Rollover” details the process of updating the root KSK. The paper, co-authored by Verisign researchers and external colleagues, received the distinguished paper award at the 2019 Internet Measurement Conference.

Final Thoughts

I’ve focused here on how a resolver validates correctness when the response to a query has a “positive” answer — i.e., when the DNS record exists. Checking correctness when the answer doesn’t exist gets even more interesting from a cryptographer’s perspective. I’ll cover this topic in my next post.

Read the complete six blog series:

  1. The Domain Name System: A Cryptographer’s Perspective
  2. Cryptographic Tools for Non-Existence in the Domain Name System: NSEC and NSEC3
  3. Newer Cryptographic Advances for the Domain Name System: NSEC5 and Tokenized Queries
  4. Securing the DNS in a Post-Quantum World: New DNSSEC Algorithms on the Horizon
  5. Securing the DNS in a Post-Quantum World: Hash-Based Signatures and Synthesized Zone Signing Keys
  6. Information Protection for the Domain Name System: Encryption and Minimization

The post The Domain Name System: A Cryptographer’s Perspective appeared first on Verisign Blog.

]]>
Chromium’s Reduction of Root DNS Traffic https://blog.verisign.com/domain-names/chromiums-reduction-of-root-dns-traffic/ Thu, 07 Jan 2021 22:03:10 +0000 https://blog.verisign.com/?p=5611 Search BarAs we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. A significant positive the DNS community could appreciate from 2020 is the receptiveness and responsiveness of the Chromium team to address the large amount of DNS queries being sent to the […]

The post Chromium’s Reduction of Root DNS Traffic appeared first on Verisign Blog.

]]>

As we begin a new year, it is important to look back and reflect on our accomplishments and how we can continue to improve. A significant positive the DNS community could appreciate from 2020 is the receptiveness and responsiveness of the Chromium team to address the large amount of DNS queries being sent to the root server system.

In a previous blog post, we quantified that upwards of 45.80% of total DNS traffic to the root servers was, at the time, the result of Chromium intranet redirection detection tests. Since then, the Chromium team has redesigned its code to disable the redirection test on Android systems and introduced a multi-state DNS interception policy that supports disabling the redirection test for desktop browsers. This functionality was released in mid-November 2020 for Android systems in Chromium 87, and, quickly thereafter, the root servers experienced a rapid decline in DNS queries.

The figure below highlights the significant decline of query volume to the root server system immediately after the Chromium 87 release. Prior to the software release, the root server system saw peaks of ~143 billion queries per day. Traffic volumes have since decreased to ~84 billion queries a day. This represents more than a 41% reduction of total query volume.
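For readers who want to check the figure, the reduction works out as follows, using the approximate query counts cited above:

    before, after = 143e9, 84e9   # approximate daily root queries before and after Chromium 87
    print(f"{(before - after) / before:.1%}")   # 41.3%, i.e., "more than a 41% reduction"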

Note: Some data from root operators was not available at the time of this publication.

This type of broad root system measurement is facilitated by ICANN’s Root Server System Advisory Committee standards document RSSAC002, which establishes a set of baseline metrics for the root server system. These root server metrics are readily available to the public for analysis, monitoring, and research. These metrics represent another milestone the DNS community could appreciate and continue to support and refine going forward.

As rightly noted in ICANN’s Root Name Service Strategy and Implementation publication, the root server system currently “faces growing volumes of traffic” not only from legitimate users but also from misconfigurations, misuse, and malicious actors, and “the costs incurred by the operators of the root server system continue to climb to mitigate these attacks using the traditional approach”.

As we reflect on how Chromium’s large impact to root server traffic was identified and then resolved, we as a DNS community could consider how outreach and engagement should be incorporated into a traditional approach of addressing DNS security, stability, and resiliency. All too often, technologists solve problems by introducing additional layers of technology abstractions and disregarding simpler solutions, such as outreach and engagement.

We believe our efforts show how such outreach and community engagement can have a significant impact, both on the parties directly involved and on the broader community. Chromium’s actions will directly aid root operators by easing the operational costs of mitigating attacks at the root. Reducing the root server system load by 41%, with potential further reductions depending on future Chromium deployment decisions, frees computing and network resources and lightens the operational costs incurred to mitigate attacks.

In pursuit of a responsible and common-sense root hygiene regimen, Verisign will continue to analyze root telemetry data and engage with entities such as Chromium to highlight operational concerns, just as Verisign has done in the past to address name collision problems. We’ll be sharing more information on this shortly.

This piece was co-authored by Matt Thomas and Duane Wessels, Distinguished Engineers at Verisign.

The post Chromium’s Reduction of Root DNS Traffic appeared first on Verisign Blog.

]]>
Meeting the Evolving Challenges of COVID-19 https://blog.verisign.com/domain-names/meeting-the-evolving-challenges-of-covid-19/ Wed, 23 Dec 2020 19:39:31 +0000 https://blog.verisign.com/?p=5603 Verisign LogoThe COVID-19 pandemic, when it struck earlier this year, ushered in an immediate period of adjustment for all of us. And just as the challenges posed by COVID-19 in 2020 have been truly unprecedented, Verisign’s mission – enabling the world to connect online with reliability and confidence, anytime, anywhere – has never been more relevant. […]

The post Meeting the Evolving Challenges of COVID-19 appeared first on Verisign Blog.

]]>

The COVID-19 pandemic, when it struck earlier this year, ushered in an immediate period of adjustment for all of us. And just as the challenges posed by COVID-19 in 2020 have been truly unprecedented, Verisign’s mission – enabling the world to connect online with reliability and confidence, anytime, anywhere – has never been more relevant. We are grateful for the continued dedication of our workforce, which enables us to provide the building blocks people need for remote working and learning, and simply for keeping in contact with each other.

At Verisign we took early action to adopt a COVID-19 work posture to protect our people, their families, and our operations. This involved the majority of our employees working from home, and implementing new cleaning and health safety protocols to protect those employees and contractors for whom on-site presence was essential to maintain key functions.

Our steps to address the pandemic did not stop there. On March 25 we announced a series of measures to help the communities where we live and work, and the broader DNS community in which we operate. This included, under our Verisign Cares program, making contributions to organizations supporting key workers, first responders and medical personnel, and doubling the company’s matching program for employee giving so that employee donations to support the COVID-19 response could have a greater impact.

Today, while vaccines may offer signs of long-term hope, the pandemic has plunged many families into economic hardship and has had a dramatic effect on food insecurity in the U.S., with an estimated 50 million people affected. With this hardship in mind, we have this week made contributions totaling $275,000 to food banks in the areas where we have our most substantial footprint: the Washington DC-Maryland-Virginia region; Delaware; and the canton of Fribourg, in Switzerland. This will help local families put food on their tables during what will be a difficult winter for many.

The pandemic has also had a disproportionate, and potentially permanent, impact on certain sectors of the economy. So today Verisign is embarking on a partnership with Virginia Ready, which helps people affected by COVID-19 access training and certification for in-demand jobs in sectors such as technology. We are making an initial contribution of $250,000 to Virginia Ready, and will look to establish further partnerships of this kind across the country in 2021.

As people around the world gather online to address the global challenges posed by COVID-19, we want to share some of the steps we have taken so far to support the communities we serve, while keeping our critical internet infrastructure running smoothly.

The post Meeting the Evolving Challenges of COVID-19 appeared first on Verisign Blog.

]]>
A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere https://blog.verisign.com/security/a-balanced-dns-information-protection-strategy-minimize-at-root-and-tld-encrypt-when-needed-elsewhere/ Mon, 07 Dec 2020 22:20:01 +0000 https://blog.verisign.com/?p=5544 Lock image and DNSOver the past several years, questions about how to protect information exchanged in the Domain Name System (DNS) have come to the forefront. One of these questions was posed first to DNS resolver operators in the middle of the last decade, and is now being brought to authoritative name server operators: “to encrypt or not […]

The post A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere appeared first on Verisign Blog.

]]>

Over the past several years, questions about how to protect information exchanged in the Domain Name System (DNS) have come to the forefront.

One of these questions was posed first to DNS resolver operators in the middle of the last decade, and is now being brought to authoritative name server operators: “to encrypt or not to encrypt?” It’s a question that Verisign has been considering for some time as part of our commitment to the security, stability and resiliency of our DNS operations and the surrounding DNS ecosystem.

Because authoritative name servers operate at different levels of the DNS hierarchy, the answer is not a simple “yes” or “no.” As I will discuss in the sections that follow, different information protection techniques fit the different levels, based on a balance between cryptographic and operational considerations.

How to Protect, Not Whether to Encrypt

Rather than asking whether to deploy encryption for a particular exchange, we believe it is more important to ask how best to address the information protection objectives for that exchange — whether by encryption, alternative techniques or some combination thereof.

Information protection must balance three objectives:

  • Confidentiality: protecting information from disclosure;
  • Integrity: protecting information from modification; and
  • Availability: protecting information from disruption.

The importance of balancing these objectives is well-illustrated in the case of encryption. Encryption can improve confidentiality and integrity by making it harder for an adversary to view or change data, but encryption can also impair availability, by making it easier for an adversary to cause other participants to expend unnecessary resources. This is especially the case in DNS encryption protocols where the resource burden for setting up an encrypted session rests primarily on the server, not the client.

The Internet-Draft “Authoritative DNS-Over-TLS Operational Considerations,” co-authored by Verisign, expands on this point:

Initial deployments of [Authoritative DNS over TLS] may offer an immediate expansion of the attack surface (additional port, transport protocol, and computationally expensive crypto operations for an attacker to exploit) while, in some cases, providing limited protection to end users.

Complexity is also a concern because DNS encryption requires changes on both sides of an exchange. The long-installed base of DNS implementations, as Bert Hubert has parabolically observed, can only sustain so many more changes before one becomes the “straw that breaks the back” of the DNS camel.

The flowchart in Figure 1 describes a two-stage process for factoring in operational risk when determining how to mitigate the risk of disclosure of sensitive information in a DNS exchange. The process involves two questions.

Figure 1: Two-stage process for factoring in operational risk when determining how to address information protection objectives for a DNS exchange.

This process can readily be applied to develop guidance for each exchange in the DNS resolution ecosystem.

Applying Guidance

Three main exchanges in the DNS resolution ecosystem are considered here, as shown in Figure 2: resolver-to-authoritative at the root and TLD level; resolver-to-authoritative below these levels; and client-to-resolver.

Figure 2 Three DNS resolution exchanges: (1) resolver-to-authoritative at the root and TLD levels; (2) resolver-to-authoritative at the SLD level (and below); (3) client-to-resolver. Different information protection guidance applies for each exchange. The exchanges are shown with qname minimization implemented at the root and TLD levels.

1. Resolver-to-Root and Resolver-to-TLD

The resolver-to-authoritative exchange at the root level enables DNS resolution for all underlying domain names; the exchange at the TLD level does the same for all names under a TLD. These exchanges provide global navigation for all names, benefiting all resolvers and therefore all clients, and making the availability objective paramount.

As a resolver generally services many clients, information exchanged at these levels represents aggregate interests in domain names, not the direct interests of specific clients. The sensitivity of this aggregated information is therefore relatively low to start with, as it does not contain client-specific details beyond the queried names and record types. However, the full domain name of interest to a client has conventionally been sent to servers at the root and TLD levels, even though this is more information than they need to know to refer the resolver to authoritative name servers at lower levels of the DNS hierarchy. The additional detail in the full domain name may be considered sensitive in some cases. Therefore, this data exchange merits consideration for protection.

The decision flow (as generally described in Figure 1) for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Yes. Techniques such as qname minimization, NXDOMAIN cut processing and aggressive DNSSEC caching — which we collectively refer to as minimization techniques — can reduce the information exchanged to just the resolver’s aggregate interests in TLDs and SLDs, removing the further detail of the full domain names and meeting the principle of minimum disclosure.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? No. The information exchanged is just the resolver’s aggregate interests in TLDs and SLDs after minimization techniques are applied, and any further protection offered by encryption wouldn’t justify the operational risk of a protocol change affecting both sides of the exchange. While some resolvers and name servers may be open to the operational risk, it shouldn’t be required of others.

Interestingly, while minimization itself is sufficient to address the disclosure risk at the root and TLD levels, encryption alone isn’t. Encryption protects against disclosure to outside observers but not against disclosure to (or by) the name server itself. Even though the sensitivity of the information is relatively low for reasons noted above, without minimization, it’s still more than the name server needs to know.
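As a small illustration of what minimization changes, the Python sketch below (my own example, not code from any resolver) lists the query names a minimizing resolver would send at each level for a hypothetical lookup of mail.internal.example.com. Real implementations follow RFC 7816 and RFC 9156 and handle many additional cases.

    def minimized_queries(full_name: str):
        # Ask each ancestor zone only for the next label it needs to delegate,
        # rather than revealing the full domain name at every level.
        labels = full_name.rstrip(".").split(".")
        for depth in range(1, len(labels) + 1):
            yield ".".join(labels[-depth:]) + "."

    for qname in minimized_queries("mail.internal.example.com"):
        print(qname)
    # com.                         <- all the root servers learn
    # example.com.                 <- all the .com TLD servers learn
    # internal.example.com.
    # mail.internal.example.com.   <- only the lowest-level servers see the full name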

Summary: Resolvers should apply minimization techniques at the root and TLD levels. Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges.

Note that at this time, Verisign has no plans to implement DNS encryption at the root or TLD servers that the company operates. Given the availability of qname minimization, which we are encouraging resolver operators to implement, and other minimization techniques, we do not currently see DNS encryption at these levels as offering an appropriate risk / benefit tradeoff.

2. Resolver-to-SLD and Below

The resolver-to-authoritative exchanges at the SLD level and below enable DNS resolution within specific namespaces. These exchanges provide local optimization, benefiting all resolvers and all clients interacting with the included namespaces.

The information exchanged at these levels also represents the resolver’s aggregate interests, but in some cases, it may also include client-related information such as the client’s subnet. The full domain names and the client-related information, if any, are the most sensitive parts.

The decision flow for this exchange is as follows:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Partially. Minimization techniques may apply to some exchanges at the SLD level and below if the full domain names have more than three labels. However, they don’t help as much with the lowest-level authoritative name server involved, because that name server needs the full domain name.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? In some cases. If the exchange includes client-specific information, then the risk may be justified. If full domain names include labels beyond the TLD and SLD that are considered more sensitive, this might be another justification. Otherwise, the benefit of encryption generally wouldn’t justify the operational risk, and for this reason, as in the previous case, encryption shouldn’t be required, except for the specific purposes noted.

Summary: Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information. Otherwise, they should not be required to implement DNS encryption.

3. Client-to-Resolver

The client-to-resolver exchange enables navigation to all domain names for all clients of the resolver.

The information exchanged here represents the interests of each specific client. The sensitivity of this information is therefore relatively high, making confidentiality vital.

The decision process in this case traverses the first and second steps:

  1. Are there alternatives with lower operational risk that can mitigate the disclosure risk? Not really. Minimization techniques don’t apply here because the resolver needs the full domain name and the client-specific information included in the request, namely the client’s IP address. Other alternatives, such as oblivious DNS, have been proposed, but are more complex and arguably increase operational risk. Note that some clients use security and confidentiality solutions at different layers of the protocol stack (e.g. a virtual private network (VPN), etc.) to protect DNS exchanges.
  2. Does the benefit of encryption in mitigating any residual disclosure risk justify its operational risk? Yes.

Summary: Clients and resolvers should implement DNS encryption on this exchange, unless the exchange is otherwise adequately protected, for instance as part of the network connection provided by an enterprise or internet service provider.

Summary of Guidance

The following table summarizes the guidance for how to protect the various exchanges in the DNS resolution ecosystem.

In short, the guidance is “minimize at root and TLD, encrypt when needed elsewhere.”

DNS Exchange: Resolver-to-Root and TLD
Purpose: Global navigational availability
Guidance:
  • Resolvers should apply minimization techniques
  • Resolvers and root and TLD servers should not be required to implement DNS encryption on these exchanges

DNS Exchange: Resolver-to-SLD and Below
Purpose: Local performance optimization
Guidance:
  • Resolvers and SLD servers (and below) should implement DNS encryption on their exchanges if they are sending sensitive full domain names or client-specific information
  • Resolvers and SLD servers (and below) should not be required to implement DNS encryption otherwise

DNS Exchange: Client-to-Resolver
Purpose: General DNS resolution
Guidance:
  • Clients and resolvers should implement DNS encryption on this exchange, unless adequate protection is otherwise provided, e.g., as part of a network connection

Conclusion

If the guidance suggested here were followed, we could expect to see more deployment of minimization techniques on resolver-to-authoritative exchanges at the root and TLD levels; more deployment of DNS encryption, when needed, at the SLD levels and lower; and more deployment of DNS encryption on client-to-resolver exchanges.

In all these deployments, the DNS will serve the same purpose as it already does in today’s unencrypted exchanges: enabling general-purpose navigation to information and resources on the internet.

DNS encryption also brings two new capabilities that make it possible for the DNS to serve two new purposes. Both are based on concepts developed in Verisign’s research program.

  1. The client can authenticate itself to a resolver or name server that supports DNS encryption, and the resolver or name server can limit its DNS responses based on client identity. This capability enables a new set of security controls for restricting access to network resources and information. We call this approach authenticated resolution.
  2. The client can confidentially send client-related information to the resolver or name server, and the resolver or name server can optimize its DNS responses based on client characteristics. This capability enables a new set of performance features for speeding access to those same resources and information. We call this approach adaptive resolution.

You can read more about these two applications in my recent blog post on this topic, Authenticated Resolution and Adaptive Resolution: Security and Navigational Enhancements to the Domain Name System.

Verisign CTO Burt Kaliski explains why minimization techniques at the root and TLD levels of the DNS are a key component of a balanced DNS information protection strategy.

The post A Balanced DNS Information Protection Strategy: Minimize at Root and TLD, Encrypt When Needed Elsewhere appeared first on Verisign Blog.

]]>
Cybersecurity Considerations in the Work-From-Home Era https://blog.verisign.com/security/cybersecurity-considerations-in-the-work-from-home-era/ Thu, 03 Dec 2020 16:00:00 +0000 https://blog.verisign.com/?p=5529 Cyberthreat keywordsNote: This article originally appeared in Verisign’s Q3 2020 Domain Name Industry Brief. Verisign is deeply committed to protecting our critical internet infrastructure from potential cybersecurity threats, and to keeping up to date on the changing cyber landscape.  Over the years, cybercriminals have grown more sophisticated, adapting to changing business practices and diversifying their approaches […]

The post Cybersecurity Considerations in the Work-From-Home Era appeared first on Verisign Blog.

]]>

Note: This article originally appeared in Verisign’s Q3 2020 Domain Name Industry Brief.

Verisign is deeply committed to protecting our critical internet infrastructure from potential cybersecurity threats, and to keeping up to date on the changing cyber landscape. 

Over the years, cybercriminals have grown more sophisticated, adapting to changing business practices and diversifying their approaches in non-traditional ways. We have seen security threats continue to evolve in 2020, as many businesses have shifted to a work from home posture due to the COVID-19 pandemic. For example, the phenomenon of “Zoom-bombing” video meetings and online learning sessions had not been a widespread issue until, suddenly, it became one. 

As more people began accessing company applications and files over their home networks, IT departments implemented new tools and set new policies to find the right balance between protecting company assets and sensitive information, and enabling employees to be just as productive at home as they would be in the office. Even the exponential jump in the use of home-networked printers that might or might not be properly secured represented a new security consideration for some corporate IT teams. 

An increase in phishing scams accompanied this shift in working patterns. About a month after much of the global workforce began working from home in greater numbers, the Federal Bureau of Investigation (FBI) reported about a 300 percent to 400 percent spike in cybersecurity complaints received by its Internet Crime Complaint Center (IC3) each day. According to the International Criminal Police Organization (Interpol), “[o]f global cyber-scams, 59% are coming in the form of spear phishing.” These phishing campaigns targeted an array of sectors, such as healthcare and government agencies, by imitating health experts or COVID-related charities.

Proactive steps can help businesses improve their cybersecurity hygiene and guard against phishing scams. One of these steps is for companies to focus part of their efforts on educating employees on how to detect and avoid malicious websites in phishing emails. Companies can start by building employee understanding of how to identify the destination domain of a URL (Uniform Resource Locator, commonly referred to as a “link”) embedded in an email that may be malicious. URLs can be complex and confusing, and cybercriminals, who are well aware of that complexity, often use deceptive tactics within URLs to mask the malicious destination domain. Companies can take proactive steps to inform their employees of these deceptive tactics and help them avoid malicious websites. Some of the most common tactics are described in Table 1 below.

  • Combosquatting: Adding words such as “secure,” “login” or “account” to a familiar domain name to trick users into thinking it is affiliated with the known domain name.
  • Typosquatting: Using domain names that resemble a familiar name but incorporate common typographical mistakes, such as reversing letters or leaving out or adding a character.
  • Levelsquatting: Using familiar names/domain names as part of a subdomain within a URL, making it difficult to discover the real destination domain.
  • Homograph attacks: Using homograph, or lookalike, domain names, such as substituting the uppercase “I” or number “1” where a lowercase “L” should have been used, or using “é” instead of an “e.”
  • Misplaced domain: Planting familiar domain names within the URL as a way of adding a familiar domain name into a complex-looking URL. The familiar domain name could be found in a path (after a “/”), as part of the additional parameters (after a “?”), as an anchor/fragment identifier (after a “#”) or in the HTTP credentials (before “@”).
  • URL-encoded characters: Placing URL-encoded characters (%[code]), which are sometimes used in URL parameters, into the domain name itself.
Table 1. Common tactics used by cybercriminals to mask the destination domain.

Teaching users to find and understand the domain portion of the URL can have lasting and positive effects on an organization’s ability to avoid phishing links. By providing employees (and their families) with this basic information, companies can better protect themselves against cybersecurity issues such as compromised networks, financial losses and data breaches.
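As a simple illustration of that advice, the Python sketch below splits some made-up phishing-style links and shows the host that would actually be contacted. The URLs and domain names are fictitious, and the last-two-labels shortcut for the registrable domain is deliberately naive; production tooling typically consults the Public Suffix List instead.

    from urllib.parse import urlsplit

    suspicious_links = [
        "https://login.example-bank.com.attacker-site.test/secure",    # levelsquatting
        "https://attacker-site.test/?redirect=www.example-bank.com",   # misplaced domain
        "https://example-bank.com@attacker-site.test/login",           # familiar name before "@"
    ]

    for url in suspicious_links:
        hostname = urlsplit(url).hostname
        registrable = ".".join(hostname.split(".")[-2:])   # naive: last two labels
        print(f"{url}\n  actually points to: {registrable}")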

To learn more about what you can do to protect yourself and your business against possible cyber threats, check out the STOP. THINK. CONNECT. campaign online at https://www.stopthinkconnect.org. STOP. THINK. CONNECT. is a global online safety awareness campaign led by the National Cyber Security Alliance and in partnership with the Anti-Phishing Working Group to help all digital citizens stay safer and more secure online.

The post Cybersecurity Considerations in the Work-From-Home Era appeared first on Verisign Blog.

]]>