Defending against cyber threats is not only critical, but increasingly difficult and expensive. A quick glance at today’s news headlines makes clear that these threats present numerous challenges to Internet users and to the organizations that both serve and employ them. For example, in 2014, McAfee Labs observed a 75 percent year-over-year increase in new malware, equating to 387 new threats per minute. Further, the Ponemon Institute estimates that the average data breach costs large organizations $3.8 million per event.
Most solutions either require extensive investment or fail to meet an organization’s constantly evolving needs. Traditional, appliance-based security solutions can require organizations to spend considerable amounts of money, both in up-front capital expenditure and in ongoing maintenance fees. Conversely, many managed cloud-based offerings lack the critical ability to customize the solution to an organization’s specific business environment and security needs. Finally, do-it-yourself (DIY) open-source solutions demand constant patching and maintenance.
Enter the Verisign DNS Firewall, an easy-to-configure, cost-effective managed cloud-based service that offers robust protection from unwanted content, malware and advanced persistent threats (APTs), delivered with the ability to customize filtering to suit an organization’s unique needs.
Perceptions can be difficult to change. People see the world through the lens of their own experiences and desires, and new ideas can be difficult to assimilate. Such is the case with the registration ecosystem. Today’s operational models exist because of decisions made over time, but the assumptions that were used to support those decisions can (and should) be continuously challenged to ensure that they are addressing today’s realities. Are we ready to challenge assumptions? Can the operators of registration services do things differently?
Two principles in computer security that help bound the impact of a security compromise are the principle of least privilege and the principle of minimum disclosure or need-to-know.
As described by Jerome Saltzer in a July 1974 Communications of the ACM article, Protection and the Control of Information Sharing in Multics, the principle of least privilege states, “Every program and every privileged user should operate using the least amount of privilege necessary to complete the job.”
There may be tradeoffs, of course, between minimizing the amount of privilege or information given to a component in a system and other objectives such as performance or simplicity. For instance, a component may be able to do its job more efficiently if given more than the minimum amount. It may also be easier to share more than is needed than to extract just the minimum required. The minimum amounts of privilege may also be hard to determine exactly, and they might change over time as the system evolves or is used in new ways.
Least privilege is well established in DNS through the delegation from one name server to another of just the authority it needs to handle requests within a specific subdomain. The principle of minimum disclosure has come to the forefront recently in the form of a technique called qname-minimization, which aims to improve privacy in the Domain Name System (DNS).
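To make the minimum-disclosure idea concrete, here is a minimal sketch (not any particular resolver's implementation) contrasting the query names a qname-minimizing resolver would send at each step of the delegation chain with the traditional behavior of sending the full name to every server. The function name and structure are illustrative assumptions.

```python
def minimized_queries(qname):
    """Return the successive query names a minimizing resolver would send.

    A traditional resolver sends the full qname to every name server in
    the delegation chain; a minimizing resolver reveals only one more
    label per step, disclosing the minimum each server needs to hand off
    the query to the next delegation.
    """
    labels = qname.rstrip(".").split(".")
    queries = []
    # Start at the rightmost label (the TLD) and add one label per step.
    for i in range(len(labels) - 1, -1, -1):
        queries.append(".".join(labels[i:]) + ".")
    return queries

# The TLD server learns only "com.", not the full name being resolved.
print(minimized_queries("www.example.com"))
# → ['com.', 'example.com.', 'www.example.com.']
```

A traditional resolver would instead send `www.example.com.` to the root, the `.com` servers, and the `example.com` servers alike, disclosing the full name at every level.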
At Verisign, we’ve made the Domain Name System (DNS) our business for more than 17 years. We support the availability of critical Internet infrastructure like .com and .net top-level domains (TLDs) and the A and J Internet Root Servers, and we provide critical Managed DNS services that ensure the availability of externally facing websites to customers around the world.
As we continue to expand our role in Internet security, we are excited to announce the next step in protecting the stability of enterprise DNS ecosystems: Verisign Recursive DNS. This new cloud-based recursive DNS service leverages Verisign’s global, securely managed DNS infrastructure to offer the performance, reliability and security that enterprises demand when securing their internal networks, ensuring that communications safely and securely reach their intended destinations.
A network traffic analyzer can tell you what’s happening in your network, while a Domain Name System (DNS) analyzer can provide context on the “why” and “how.”
This was the theme of the recent Verisign Labs Distinguished Speaker Series discussion led by Paul Vixie and Robert Edmonds, titled Passive DNS Collection and Analysis – The “dnstap” Approach.
In Ridley Scott’s classic 1982 science fiction film Blade Runner, replicant Roy Batty (portrayed by Rutger Hauer) delivers this soliloquy:
“I’ve…seen things you people wouldn’t believe…Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those…moments…will be lost in time, like (cough) tears…in…rain. Time…to die.”
The WHOIS protocol was first published as RFC 812 in March 1982 – almost 33 years ago. It was designed for use in a simpler time, when the community of Internet users was much smaller. WHOIS eventually became the default registration data directory for the Domain Name System (DNS). As interest in domain names and the DNS has grown over time, attempts have been made to add new features to WHOIS. None of these attempts have been successful, and to this day we struggle to make WHOIS do things it was never designed to do.
UCLA and Washington University in St. Louis recently announced the launch of the Named Data Networking (NDN) Consortium, a new forum for collaboration among university and industry researchers, including Verisign, on one candidate next-generation information-centric architecture for the internet.
Verisign Labs has been collaborating with UCLA Professor Lixia Zhang, one of the consortium’s co-leaders, on this future-directed design as part of our university research program for some time. The consortium launch is a natural next step in facilitating this research and its eventual application.
Van Jacobson, an Internet Hall of Fame member and the other co-leader of the NDN Consortium, surveyed developments in this area in his October 2012 talk in the Verisign Labs Distinguished Speaker Series titled, “The Future of the Internet? Content-Centric Networking.”
As I stated in my summary of the talk, content-centric networking and the related research areas under the headings of information-centric networking and NDN bring internet protocols up to date to match the way many of us already use the internet. As Van noted, when people want to access content over the internet – for instance, the recording of his talk – they typically reference a URL, such as http://www.youtube.com/watch?v=3zOLrQJ5kbU.
If you are trying to communicate anonymously on the internet using Tor, this paper may be an important read for you. Anonymity and privacy are at the core of what the Tor project promises its users. Short for The Onion Router, Tor provides individuals with a mechanism to communicate anonymously on the internet. As part of its offerings, Tor provides hidden services: anonymous networking between servers that are configured to receive inbound connections only through Tor. To route requests to these hidden services, Tor identifies them within the .onion namespace, a non-delegated (pseudo) top-level domain. Although the Tor system was designed to prevent .onion requests from leaking into the global DNS resolution process, numerous requests are still observed in the global DNS, raising concern about the severity of the leakage and the exposure of sensitive private data.
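Because .onion is a non-delegated pseudo-TLD, any .onion query appearing in global DNS traffic is by definition a leak. The following is a minimal sketch of how such queries could be flagged in a list of observed query names; the function name, the sample query list, and the onion address shown (a placeholder, not a real service) are all assumptions for illustration.

```python
def is_onion_leak(qname):
    """Return True if a query name falls under the special-use .onion TLD.

    Such names should be resolved only inside the Tor network; seeing one
    in global DNS traffic indicates the request leaked out of Tor.
    """
    labels = qname.lower().rstrip(".").split(".")
    return labels[-1] == "onion"

# Hypothetical sample of observed query names from a resolver log.
queries = [
    "example.com.",
    "abcdefghijklmnop.onion.",   # placeholder onion address, not a real one
    "www.onion.example.",        # "onion" as a middle label is not a leak
]
leaks = [q for q in queries if is_onion_leak(q)]
print(leaks)
# → ['abcdefghijklmnop.onion.']
```

Note that only the rightmost label matters: a name like `www.onion.example.` merely contains "onion" as an interior label and resolves normally.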