Danny McPherson leads Verisign’s technology and security organizations. He is responsible for Verisign's corporate and production infrastructure, platforms, services, engineering and operations, as well as information and corporate security. He has actively participated in internet operations, research and standardization since the early 1990s, including serving on the Internet Architecture Board and chairing an array of Internet Engineering Task Force and other working groups and committees. He has authored several books and numerous patents, internet protocol standards and network and security research publications.
Prior to joining Verisign in 2010, Danny was Chief Security Officer at Arbor Networks, where he developed solutions to detect and mitigate cyberattacks. He also performed pioneering research on internet infrastructure evolution, as well as botnet and malware collection and analysis. Before that, he held technical leadership positions in architecture, engineering and operations with Amber Networks, Qwest Communications, Genuity, MCI Communications and the U.S. Army Signal Corps.
The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.
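To make that first step concrete, here is a minimal Python sketch of a forward DNS lookup using the standard library’s resolver interface; the hostname is just an example, and a real application would typically add error handling and caching.

```python
import socket

def resolve(hostname):
    """Look up the IP addresses a name currently maps to via the system resolver."""
    results = socket.getaddrinfo(hostname, None)
    # Each result carries a sockaddr tuple; its first element is the address.
    return sorted({info[4][0] for info in results})

if __name__ == "__main__":
    # Example name only; any resolvable name would do.
    for address in resolve("blog.verisign.com"):
        print(address)
```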
Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.
At Verisign we take our internet stewardship mission very seriously. So when details of the XcodeGhost infection emerged over the past week, researchers at Verisign iDefense wanted to help advance community research efforts related to the issue and, leveraging our unique capabilities, offer a public service that helps readers determine their current and historical exposure to the infection.
Background
First identified in recent days on the Chinese microblogging site Sina Weibo, XcodeGhost is an infection of Xcode, the framework developers use to create apps for Apple’s iOS and OS X operating systems. Most developers download the official version of Xcode directly from Apple; however, some acquire unofficial versions from sites with faster download speeds.
Apps created with XcodeGhost contain instructions, unknown to both the app developers and the end users, that collect potentially sensitive information from the user’s device and send it to command-and-control (C2) servers managed by the XcodeGhost operator. In this way, the XcodeGhost operators circumvented the security of both Apple’s official Xcode distribution and Apple’s App Store.
The infection had widespread impact. As of September 25th, Palo Alto Networks and Fox-IT had identified more than 87 infected apps by name, and FireEye claimed to have identified more than 4,000 infected apps. This activity impacts millions of users both in China and elsewhere in the world. To understand key aspects of the infection, iDefense researchers leveraged authoritative DNS traffic patterns to the C2 domains.
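To give a sense of what working with DNS traffic patterns can look like (this is not iDefense’s actual tooling), the rough Python sketch below tallies daily query counts for a set of C2 domain names from a simple per-line query log; the log path, log format and domain list are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical C2 domain list; real XcodeGhost indicators were published by
# the security community at the time.
C2_DOMAINS = {"example-c2-domain.com"}

def tally_c2_queries(log_path):
    """Count queries per (date, domain) from a log with 'date qname' on each line."""
    counts = defaultdict(Counter)
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 2:
                continue  # skip malformed lines
            date, qname = fields[0], fields[1].rstrip(".").lower()
            if qname in C2_DOMAINS:
                counts[date][qname] += 1
    return counts

if __name__ == "__main__":
    for date, per_domain in sorted(tally_c2_queries("queries.log").items()):
        for domain, count in sorted(per_domain.items()):
            print(date, domain, count)
```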
It has been another busy quarter for the team that works on our DDoS Protection Services here at Verisign. As detailed in the recent release of our Q2 2014 DDoS Trends Report, from April to June of this year we not only saw a jump in the frequency and size of attacks against our customers, but also witnessed the largest DDoS attack we’ve ever observed and mitigated: an attack of more than 300 Gbps against one of our media and entertainment customers.
Recent attacks targeting enterprise websites have created greater awareness of how critical the DNS is to the reliability of internet services, and of the potentially catastrophic impact of a DNS outage. The DNS, made up of a complex system of root and lower-level name servers, translates user-friendly domain names into numerical IP addresses. With few exceptions, DNS lives in a grey area between IT and network operations. With the increasing occurrence of distributed denial of service (DDoS) attacks, advanced persistent threats (APTs) and exploitation of user errors through techniques such as typosquatting and phishing, enterprises can no longer take a passive role in managing their DNS infrastructure.
Since publishing our second SSR report a few weeks back (recently updated as revision 1.1), we’ve been taking a deeper look at queries to the root servers that elicit “Name Error,” or NXDomain, responses, and we figured we’d share some preliminary results.
Throughout this series of blog posts we’ve discussed a number of issues related to the security, stability and resilience of the DNS ecosystem, particularly as we approach the rollout of new gTLDs. We have also highlighted a number of issues that we believe remain outstanding and need to be resolved before new gTLDs can be introduced safely, and we’ve tried to provide some context as to why, all the while noting that nearly all of these unresolved recommendations came from parties in addition to Verisign over the last several years. We received a good bit of flak from a small number of folks asking why we’re making such a stink about this, and we’ve attempted to meter our tone while increasing our volume on these matters. Of course, we’re not alone in this, as a growing list of others have illustrated; the conclusion of SSAC’s SAC059 report, published just over 90 days ago, illustrates this in part:
The SSAC believes that the community would benefit from further inquiry into lingering issues related to expansion of the root zone as a consequence of the new gTLD program. Specifically, the SSAC recommends those issues that previous public comment periods have suggested were inadequately explored as well as issues related to cross-functional interactions of the changes brought about by root zone growth should be examined. The SSAC believes the use of experts with experience outside of the fields on which the previous studies relied would provide useful additional perspective regarding stubbornly unresolved concerns about the longer-term management of the expanded root zone and related systems.
In 2010, ICANN’s Security and Stability Advisory Committee (SSAC) published SAC045, a report calling attention to particular problems that may arise should a new gTLD applicant use a string that has been seen with measurable (and meaningful) frequency in queries for resolution by the root system. The queries to which they referred involved invalid top-level domain (TLD) queries (i.e., non-delegated strings) at the root level of the domain name system (DNS), queries which elicit responses commonly referred to as Name Error, or NXDomain, responses from root name servers.
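As a rough illustration of the kind of query SAC045 is concerned with, the sketch below sends a query for a name under a non-delegated string directly to a root server and reports whether the response is an NXDomain; it assumes the third-party dnspython package, and both the root server address and the example name are merely illustrative.

```python
import dns.message
import dns.query
import dns.rcode

# a.root-servers.net; any root server would answer equivalently.
ROOT_SERVER = "198.41.0.4"

def elicits_nxdomain(name):
    """Query a root server for a name and report whether it elicits NXDomain."""
    query = dns.message.make_query(name, "A")
    response = dns.query.udp(query, ROOT_SERVER, timeout=5)
    return response.rcode() == dns.rcode.NXDOMAIN

if __name__ == "__main__":
    # "corp" is one of the non-delegated strings frequently seen in root traffic;
    # the host label here is purely illustrative.
    print(elicits_nxdomain("somehost.corp."))
```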
Do you recall when you were a kid and experienced for the first time the unnatural event of some other kid “stealing” your name, with their parents now calling their child by your name and causing much confusion for all on the playground? And how it all became even more complicated, or at least unnecessarily complex, when you and that kid shared a classroom and teacher, or a street, or a coach and team, and perhaps that kid even had the same surname as you, amplifying the issue? What you were experiencing was a naming collision (in meatspace).
For nearly all communications on today’s internet, domain names play a crucial role in providing stable navigation anchors for accessing information in a predictable and safe manner, irrespective of where you’re located or the type of device or network connection you’re using. The underpinnings of this access are made possible by the Domain Name System (DNS), a behind-the-scenes system that maps human-readable mnemonic names (e.g., www.Verisign.com) to machine-usable internet addresses (e.g., 69.58.187.40). The DNS is on the cusp of expanding profoundly in places where it has otherwise been stable for decades, and absent some explicit action it may do so in a very dangerous manner.
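One simple way to see the collision concern in practice (a hedged sketch, not a recommended methodology) is to check whether names built on internal-only suffixes begin resolving on the public internet once a matching string is delegated; the candidate names below are hypothetical.

```python
import socket

# Hypothetical internal-style names that could collide with newly delegated TLDs.
CANDIDATE_NAMES = ["intranet.corp", "printer.home", "files.mail"]

def resolves(name):
    """Return True if the system resolver currently returns an address for the name."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    for name in CANDIDATE_NAMES:
        print(name, "resolves" if resolves(name) else "does not resolve")
```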
Verisign recently published a technical report on new generic top-level domain (gTLD) security and stability considerations. The initial objective of the report was to assess, for Verisign’s senior management, our own operational preparedness for new gTLDs, both as a Registry Service Provider for approximately 200 strings and as a direct applicant for 14 new gTLDs (including 12 internationalized domain name (IDN) transliterations of .com and .net). The goal was to help ensure our teams, infrastructure and processes are prepared for the pilot and general pre-delegation testing (PDT) exercises, various bits of which are underway, and for the subsequent production delegations and launch of new gTLDs shortly thereafter.