Dr. Burt Kaliski Jr.

Dr. Burt Kaliski Jr., Senior Vice President and Chief Technology Officer, leads Verisign’s long-term research program. Through the program’s innovation initiatives, the CTO organization, in collaboration with business and technology leaders across the company, explores emerging technologies, assesses their impact on the company’s business, prototypes and evaluates new concepts, and recommends new strategies and solutions. Burt is also responsible for the company’s industry standards engagements, university collaborations, and technical community programs.

Prior to joining Verisign in 2011, Burt served as the Founding Director of the EMC Innovation Network, the global collaboration among EMC’s research and advanced technology groups and its university partners. He joined EMC from RSA Security, where he served as Vice President of Research and Chief Scientist. Burt started his career at RSA in 1989 as the founding scientist of RSA Laboratories, where his contributions included the development of the Public-Key Cryptography Standards (PKCS), now widely deployed in internet security.

Burt has held appointments as a guest professor at Wuhan University’s College of Computer Science and as a guest professor and member of the international advisory board of Peking University's School of Software and Microelectronics. He has also taught at Stanford University and Rochester Institute of Technology. Burt was Program Co-chair of Cryptographic Hardware and Embedded Systems 2002, Chair of the Institute of Electrical and Electronics Engineers P1363 working group, Program Chair of CRYPTO ’97, and General Chair of CRYPTO ’91. He has also served on the scientific advisory board of QEDIT, a privacy-enhancing technology provider.

Burt is a member of the Association for Computing Machinery, a senior member of the IEEE Computer Society, and a member of Tau Beta Pi.

Burt received his PhD, Master of Science and Bachelor of Science degrees in computer science from the Massachusetts Institute of Technology, where his research focused on cryptography.


Recent posts by Dr. Burt Kaliski Jr.:

Pioneering Technologies for the Long Term

We recently hosted Dr. Ralph Merkle as a guest speaker for the Verisign Labs Distinguished Speaker Series. His talk, “Quantum Computers and Public-Key Cryptosystems,” was a great presentation on how molecular nanotechnology — the ability to economically manufacture most arrangements of atoms permitted by physical law — could fundamentally alter the world as we know it. Ralph’s and many others’ research on this topic has been groundbreaking, and we are grateful he took the time to come and share his knowledge.

Part 4 of 4 – Conclusion: SLD Blocking Is Too Risky without TLD Rollback

ICANN’s second-level domain (SLD) blocking proposal includes a provision that a party may demonstrate that an SLD not in the initial sample set could cause “severe harm,” and that the SLD can then be blocked for a certain period of time. The extent to which that provision would need to be exercised remains to be determined. However, given the concerns outlined in Part 2 and Part 3 of this series, it seems likely that there could be many additions to (and deletions from!) the blocked list, given the lack of correlation between the DITL data and actual at-risk queries.

vBSDcon: Builders and Archaeologists

Fascinating tour of C compiler evolution by David Chisnall http://vrsn.cc/1dUb5rY @Verisign’s #vBSDcon. Compatible with DOS or VAX?

I began my journey into computer science as a high school freshman coding on a TI-59 calculator. Later in my high school years, I wrote computer chess games on a PDP-11/34 minicomputer in BASIC and, for speed, in assembly language. I might have contributed inadvertently to the Y2K problem with some FORTRAN and COBOL programs I wrote in the early 1980s. In college, I learned LISP and CLU on a MULTICS operating system, and had a part-time job where I programmed on a VAX-11/750. But eventually I did get around to coding in C on a Unix box.

So this is a little more information than 140 characters would allow, which may explain why I found David Chisnall’s opening talk at the recent vBSDcon so fascinating. DOS and VAX are to computer professionals what the classics are to the liberal arts: our Iliad and Odyssey. And C and Unix, in their various forms, are the living languages that preserve the connection to the early days – the contemporary variants of Koine Greek. The art of building C compilers as well as operating systems continues to advance skillfully.

Part 1 of 4 – Introduction: ICANN’s Alternative Path to Delegation

As widely discussed recently, observed within the ICANN community several years ago, and anticipated in the broader technical community even earlier, the introduction of a new generic top-level domain (gTLD) at the global DNS root could result in name collisions with previously installed systems. Such systems sometimes send queries to the global DNS with domain name suffixes that, under reasonable assumptions at the time the systems were designed, may not have been expected to be delegated as gTLDs. The introduction of a new gTLD may conflict with those assumptions, such that the newly delegated gTLD collides with a domain name suffix in use within an internal name space, or one that is appended to a domain name as a result of search-list processing.
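To make the scenario concrete, here is a minimal Python sketch of search-list processing; the suffixes, short names, and delegated TLDs are hypothetical, chosen only to show how a query that once fell through harmlessly at the root can collide once a matching gTLD is delegated.

# Minimal sketch of DNS search-list processing (names and suffixes hypothetical).
# A stub resolver tries a short name with each configured suffix in turn.
# Before delegation, "mail.corp." fails at the root and the resolver falls
# through to the next candidate; after "corp" is delegated, the same query
# suddenly resolves somewhere new -- a name collision.

SEARCH_LIST = ["corp", "example.com"]   # assumed internal resolver config
LONG_DELEGATED = {"com", "net", "org"}  # long-established TLDs
NEWLY_DELEGATED = {"corp"}              # hypothetical newly delegated gTLD

def candidate_queries(short_name):
    """Fully qualified names a stub resolver might try, in order."""
    return [f"{short_name}.{suffix}." for suffix in SEARCH_LIST]

for fqdn in candidate_queries("mail"):
    tld = fqdn.rstrip(".").rsplit(".", 1)[-1]
    if tld in NEWLY_DELEGATED:
        print(f"{fqdn}: collision risk -- used to fail at the root, now delegated")
    elif tld in LONG_DELEGATED:
        print(f"{fqdn}: resolves in the global DNS, as the designer expected")
    else:
        print(f"{fqdn}: NXDOMAIN at the root (safe fall-through)")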

Part 3 of 4 – Name Collision Mitigation Requires Qualitative Analysis

As discussed in the several studies on name collisions published to date, determining which queries are at risk, and thus how to mitigate the risk, requires qualitative analysis (New gTLD Security and Stability Considerations; New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis; Name Collisions in the DNS). Blocking a second-level domain (SLD) simply on the basis that it was queried for in a past sample set runs a significant risk of false positives. SLDs that could have been delegated safely may be excluded on quantitative evidence alone, limiting the value of the new gTLD until the status of the SLD can be proven otherwise.

Similarly, not blocking an SLD on the basis that it was not queried for in a past sample set runs a comparable risk of false negatives.
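As a toy illustration (all of the data below is hypothetical), the following Python sketch contrasts the SLDs observed in a sample set with the SLDs actually at risk, and shows how a purely quantitative blocking rule produces both kinds of error:

# Toy illustration of quantitative SLD blocking (all sets hypothetical).
# Blocking exactly the SLDs observed in a past sample set mislabels both
# safe-but-observed SLDs (false positives) and risky-but-unobserved SLDs
# (false negatives).

observed_in_sample = {"mail", "www", "intranet", "shop"}  # seen in sample data
actually_at_risk   = {"mail", "intranet", "vpn"}          # true at-risk SLDs

blocked = observed_in_sample                  # the purely quantitative rule

false_positives = blocked - actually_at_risk  # blocked though safe to delegate
false_negatives = actually_at_risk - blocked  # delegated though at risk

print("false positives (safe SLDs blocked):   ", sorted(false_positives))
print("false negatives (at-risk SLDs missed): ", sorted(false_negatives))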

A better way to deal with the risk is to treat not the symptoms but the underlying problem: that queries are being made by installed systems (or internal certificates are being employed by them) under the assumption that certain gTLDs won’t be delegated.

Part 2 of 4 – DITL Data Isn’t Statistically Valid for This Purpose

For several years, DNS-OARC has been collecting DNS query data “from busy and interesting DNS name servers” as part of an annual “Day-in-the-Life” (DITL) effort (an effort originated by CAIDA in 2002) that I discussed in the first blog post in this series. DNS-OARC currently offers eight such data sets, covering the queries to many but not all of the 13 DNS root servers (and some non-root data) over a two-day period or longer each year from 2006 to the present. With tens of billions of queries, the data sets provide researchers with a broad base of information about how the world is interacting with the global DNS as seen from the perspective of root and other name server operators.

In order for second-level domain (SLD) blocking to mitigate the risk of name collisions for a given gTLD, it must be the case that the SLDs associated with at-risk queries occur with sufficient frequency and geographical distribution to be captured in the DITL data sets with high probability. Because it is a purely quantitative countermeasure, based only on the occurrence of a query, not the context around it, SLD blocking does not offer a model for distinguishing at-risk queries from queries that are not at risk. Consequently, SLD blocking must make a stronger assumption to be effective: that any queries involving a given SLD occur with sufficient frequency and geographical distribution to be captured with high probability.

Put another way, the DITL data set – limited in time to an annual two-day period and in space to the name servers that participate in the DITL study – offers only a sample of the queries from installed systems, not statistically significant evidence of their behavior and of which at-risk queries are actually occurring.
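To see why, consider a back-of-the-envelope model (my own simplification, not one taken from the studies cited above) in which an installed system issues a given at-risk query as a Poisson process at an average rate of r queries per day toward a DITL-participating server. The probability that the query shows up at all in a two-day window is 1 - exp(-2r), which falls off quickly for infrequent queries:

# Back-of-the-envelope sketch (a simplification, not from the DITL studies):
# model an installed system's at-risk query as a Poisson process with rate
# r queries/day arriving at a DITL-participating server, and compute the
# probability that at least one instance lands inside a 2-day capture window.

import math

WINDOW_DAYS = 2  # annual DITL collection window

for rate_per_day in (10, 1, 1 / 7, 1 / 30, 1 / 365):
    p_captured = 1 - math.exp(-rate_per_day * WINDOW_DAYS)
    print(f"rate = {rate_per_day:8.4f}/day -> P(seen in window) = {p_captured:.3f}")

Under this admittedly rough assumption, a query issued about once a week has only about a one-in-four chance of appearing in the window at all.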

Diversity, Openness and vBSDcon 2013

“There never were in the world two opinions alike, no more than two hairs or two grains; the most universal quality is diversity”

–Michel Eyquem, seigneur de Montaigne (1533–1592)

Diversity is a central design principle of the Domain Name System. With respect to the DNS root, it’s the reason that there are 13 separately managed root servers with 12 independent operators. It’s the reason Verisign operates the two root servers we’re responsible for – the A and J roots – as well as our other name servers, at multiple locations around the world. It’s also the reason that within these locations operated by Verisign, multiple physical servers handle the incoming traffic. And it’s the reason that among these multiple servers, we use multiple hardware and software platforms, as well as multiple network providers.
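As a loose illustration (the inventory below is invented, not Verisign’s actual deployment), the principle can be stated as a property one might check in code: no dimension of the fleet should collapse to a single value.

# Loose illustration only -- the inventory is invented, not Verisign's
# actual deployment. Diversity means no single point of commonality:
# check that each dimension of a server fleet has more than one value.

from collections import Counter

fleet = [
    {"site": "Ashburn",   "hardware": "vendor-A", "software": "stack-1", "network": "provider-X"},
    {"site": "Zurich",    "hardware": "vendor-B", "software": "stack-2", "network": "provider-Y"},
    {"site": "Singapore", "hardware": "vendor-A", "software": "stack-2", "network": "provider-Z"},
]

for dimension in ("site", "hardware", "software", "network"):
    values = Counter(server[dimension] for server in fleet)
    verdict = "diverse" if len(values) > 1 else "SINGLE POINT OF COMMONALITY"
    print(f"{dimension:<8}: {dict(values)} -> {verdict}")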

In other words, diversity is one reason the DNS industry in general, and Verisign in particular, doesn’t do everything the same way and in the same place.

Improving the Internet, In Person and Online

As much as the world has become more connected, so that people across the world can collaborate online at any hour of the day (even in the midst of weather events like Sandy), there’s still an important role for conferences that bring people together in person at a specific time and place.

I’ve been reminded of the value of this technical “networking” as I’ve attended some key events related to my own work in recent weeks.

In mid-October, I spent some time at the ICANN 45 meeting in Toronto, the triannual focal point for industry work on domain names (as well as IP “numbers”, the second “N”). Pat Kane, senior vice president and general manager of Verisign’s Naming Services, describes his experiences at this important series as exemplifying “hard work and collaboration.” Good technical consensus, as I’ve learned through my past years in industry forums in cryptography and security, starts with trust. The many introductions and conversations that I enjoyed throughout my visit built on this value.

The Promise of a Better Connected Digital World

Earlier this year, Verisign announced its 2012 Internet Infrastructure Grant program, which called for proposals for basic research with “potential to improve the availability and security of internet access in all parts of the world.” Two proposals would be selected based on criteria of relevance, innovation, feasibility, and overall quality.

It’s my honor now to announce that the program’s distinguished judging panel has reached its decisions. The awards will go to:

  • Converged, Secure Mobile Communication Support Through Infrastructure-opportunistic, DHT-based Network Services led by Prof. Z. Morley Mao, University of Michigan (United States) and Prof. Cui Yong, Tsinghua University (China)
  • Downscaling Entity Registries for Poorly-Connected Environments led by Prof. Dr. Philippe Cudré-Mauroux, Director, eXascale Infolab, University of Fribourg (Switzerland) and Dr. Christophe Guéret, Vrije Universiteit Amsterdam (The Netherlands)

Do We Need An IPv6 Flag Day?

In recent interviews about World IPv6 Launch, I’ve been asked by several different people whether I think there needs to be some kind of “Flag Day” on which the whole world switches at once from Internet Protocol version 4 (IPv4) to version 6 (IPv6).

I don’t think a flag day is needed. World IPv6 Launch is just the right thing.

It’s worth looking at some previous flag-type days to get a better sense of why.
