On Nov. 30 and Dec. 1, 2015, some of the Internet’s Domain Name System (DNS) root name servers received large amounts of anomalous traffic. Last week the root server operators published a report on the incident. In the interest of further transparency, I’d like to take this opportunity to share Verisign’s perspective, including how we identify, handle and react, as necessary, to events such as this.
The Domain Name System (DNS) offers ways to significantly strengthen the security of Internet applications via a new protocol called the DNS-based Authentication of Named Entities (DANE). One problem it helps to solve is how to easily find keys for end users and systems in a secure and scalable manner. It can also help to address well-known vulnerabilities in the public Certification Authority (CA) model. Applications today need to trust a large number of global CAs. There are no scoping or naming constraints for these CAs – each one can issue certificates for any server or client on the Internet, so the weakest CA can compromise the security of the whole system. As described later in this article, DANE can address this vulnerability.
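The mechanics DANE relies on can be sketched in a few lines. Per RFC 6698, a client derives a DNS owner name from the service's port, protocol and host, fetches the TLSA record published there, and checks the server's certificate against the record's association data. The sketch below shows those two steps with Python's standard library only; the host name is an illustrative placeholder, and a real client would also require DNSSEC validation of the TLSA response.

```python
import hashlib

def tlsa_owner_name(port: int, proto: str, host: str) -> str:
    """Build the DNS owner name where a TLSA record is published (RFC 6698)."""
    return f"_{port}._{proto}.{host}"

def matches_tlsa(cert_der: bytes, assoc_data: bytes, matching_type: int) -> bool:
    """Compare certificate data against a TLSA record's association data.
    Matching types per RFC 6698: 0 = exact match, 1 = SHA-256, 2 = SHA-512."""
    if matching_type == 0:
        return cert_der == assoc_data
    if matching_type == 1:
        return hashlib.sha256(cert_der).digest() == assoc_data
    if matching_type == 2:
        return hashlib.sha512(cert_der).digest() == assoc_data
    raise ValueError("unknown TLSA matching type")

# A client connecting to HTTPS on example.com would look up the TLSA
# record at this name before (or alongside) the usual CA-based checks:
print(tlsa_owner_name(443, "tcp", "example.com"))  # _443._tcp.example.com
```

Because the association data lives in the DNS under the domain holder's own name, only that domain's zone (not every CA on the Internet) can vouch for the certificate.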
Consumers are increasingly impatient and expect a website to load in two seconds or less; most will quickly abandon a slow-loading page, taking their shopping carts with them and costing retailers revenue. With so many potential problems that can slow down your site, the domain name system (DNS) doesn’t have to be one of them.
What is DNS?
DNS is the Internet’s equivalent to a phone book. It maintains a directory of domain names and translates them to their respective Internet Protocol (IP) addresses, enabling the end user to access a desired Web page. Any disruption to the DNS during the holiday season can be disastrous for retailers.
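The "phone book" lookup described above is a one-line operation in most languages. The sketch below uses Python's standard library; it resolves "localhost" so it works without network access, but a retailer's resolver performs the same translation for public names such as "www.example.com".

```python
import socket

def resolve(host: str) -> list:
    """Translate a host name into its IPv4 addresses, as DNS does."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically ['127.0.0.1']
```

If this lookup stalls or fails, the browser never even begins fetching the page, which is why DNS disruption is so costly to retailers.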
“DNS is the Achilles’ heel of the Web, often forgotten, and its impact on website performance is ignored until it breaks down,” explains Mehdi Daoudi, CEO of Web performance monitoring firm Catchpoint. However, it doesn’t have to be.
A comprehensive defense-in-depth strategy requires security mechanisms to be applied through the implementation of hardware, software and security policies. Hardware protection includes, but is not limited to, the implementation of next generation firewalls (NGFW), intrusion prevention systems/intrusion detection systems (IPS/IDS) and secure Web gateways (SWG). Software-based protection is done through anti-virus software deployments, automated patch management or tools for Internet monitoring. Finally, no defense-in-depth strategy would be complete without the implementation of strong security policies that prescribe processes for incident reporting, service and system audits, and security awareness training.
Today’s new age of ubiquitous connectivity has created an insatiable and growing demand among employees and consumers to be online with familiar systems and tools at all times. Employees are no longer satisfied with the limited choices in devices and tools provided by their corporate IT organizations. They want to use what they want, when they want, believing that choosing their own devices and tools gives them the highest level of comfort and efficiency. This desire to use personal devices in work environments, referred to as “bring your own device” (BYOD), coupled with the growing cyber-attack surface, poses significant challenges to IT organizations, leading many to ask themselves: are we ready to support BYOD?
Even though summer is just heating up, Internet retailers already have visions of dollar signs dancing in their heads as they prepare for the onslaught of holiday Web traffic that will soon ring in the 2015 holiday season. However, much of their focus is on marketing, and not the critical security measures they need to have in place to help keep their customers safe and satisfied as they shop online during the holidays.
As we have seen from the numerous security breaches and cyberattacks reported during last year’s holiday season, understanding the threat landscape and putting appropriate mitigation plans in place is critical to a business’s revenue and reputation. Just one hour of network downtime due to an outage or malicious attack can have far reaching consequences for a retailer, especially during the holidays.
Defending against cyber threats is not only critical, but increasingly difficult and expensive. A quick glance at today’s news headlines makes clear that these threats present numerous challenges to Internet users and the organizations that both serve and employ them. For example, in 2014, McAfee Labs observed a 75 percent year-over-year increase in new malware, equating to 387 new threats per minute. Further, the Ponemon Institute estimates that the average data breach costs a large organization $3.8 million per event.
Most solutions either require extensive investment or do not meet an organization’s constantly evolving needs. Traditional, appliance-based security solutions can require organizations to shell out considerable amounts of money, both in up-front capital expenditure and in ongoing maintenance fees. Meanwhile, many managed cloud-based offerings do not provide the critical capability to customize the solution based on an organization’s specific business environment and security needs. Finally, do-it-yourself (DIY) open-source solutions suffer from constant patching and maintenance burdens.
Enter the Verisign DNS Firewall, an easy-to-configure, cost-effective, managed cloud-based service that offers robust protection from unwanted content, malware and advanced persistent threats (APTs), delivered with the ability to customize filtering to suit an organization’s unique needs.
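The core filtering idea behind any DNS firewall can be sketched simply: before resolving a query, the service checks the requested name (and its parent domains) against customizable category lists and blocks or redirects matches. The sketch below is an illustration of that general technique, not Verisign's implementation; the domain names and categories are hypothetical.

```python
# Hypothetical policy table: domains mapped to blocking categories.
BLOCKLIST = {
    "malware-command.example": "malware",
    "phishing-site.example": "phishing",
}

def policy_for(qname):
    """Return the blocking category for a query name, or None if allowed.
    Walks up the label hierarchy so subdomains inherit a parent's policy."""
    labels = qname.rstrip(".").lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKLIST:
            return BLOCKLIST[candidate]
    return None  # not blocked; resolve normally

print(policy_for("c2.malware-command.example"))  # 'malware'
print(policy_for("www.example.com"))             # None
```

Because the check happens at resolution time, a single policy protects every device that uses the resolver, with no per-endpoint software to patch.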
Perceptions can be difficult to change. People see the world through the lens of their own experiences and desires, and new ideas can be difficult to assimilate. Such is the case with the registration ecosystem. Today’s operational models exist because of decisions made over time, but the assumptions that were used to support those decisions can (and should) be continuously challenged to ensure that they are addressing today’s realities. Are we ready to challenge assumptions? Can the operators of registration services do things differently?
As described by Jerome Saltzer in a July 1974 Communications of the ACM article, Protection and the Control of Information Sharing in Multics, the principle of least privilege states, “Every program and every privileged user should operate using the least amount of privilege necessary to complete the job.”
There may be tradeoffs, of course, between minimizing the amount of privilege or information given to a component in a system and other objectives such as performance or simplicity. For instance, a component may be able to do its job more efficiently if given more than the minimum amount, and it may be easier to share more than is needed than to extract just the minimum required. The minimum amount of privilege may also be hard to determine exactly, and it may change over time as the system evolves or is used in new ways.
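Saltzer's principle translates directly into code. In the hypothetical sketch below, a report generator is handed only the single read capability it needs, rather than a full datastore with write and delete access; all names are illustrative.

```python
class Datastore:
    """A store with broad privileges: read, write and delete."""
    def __init__(self):
        self._records = {"order-1": "shipped"}

    def read(self, key):
        return self._records[key]

    def write(self, key, value):
        self._records[key] = value

    def delete(self, key):
        del self._records[key]

def make_reader(store):
    """Narrow the full datastore down to a single read-only capability."""
    return store.read

def build_report(read):
    # This component operates with the least privilege necessary to
    # complete its job: it can read records but not modify or delete them.
    return f"order-1 status: {read('order-1')}"

store = Datastore()
print(build_report(make_reader(store)))  # order-1 status: shipped
```

The tradeoff noted above is visible even here: passing the whole `store` would be simpler, but then a bug or compromise in `build_report` could corrupt data it never needed to touch.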
Least privilege is well established in DNS through the delegation from one name server to another of just the authority it needs to handle requests within a specific subdomain. The principle of minimum disclosure has come to the forefront recently in the form of a technique called qname-minimization, which aims to improve privacy in the Domain Name System (DNS).
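The idea behind qname minimization (later standardized in RFC 7816) can be sketched briefly: rather than sending the full query name to every name server in the delegation chain, the resolver reveals only one more label at each step. A minimal illustration:

```python
def minimized_queries(qname):
    """Return the successively longer names a minimizing resolver asks,
    starting with the top-level domain and ending with the full name."""
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

# A traditional resolver sends "www.example.com" to the root servers,
# the .com servers, and the example.com servers alike. A minimizing
# resolver sends each server only what it needs in order to delegate:
print(minimized_queries("www.example.com"))
# ['com', 'example.com', 'www.example.com']
```

The root and TLD servers thus learn only that someone is interested in "com" or "example.com", never the full host name, which is the minimum disclosure each needs to do its job.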