Dr. Burt Kaliski Jr.

Senior Vice President and Chief Technology Officer

Dr. Burt Kaliski Jr., senior vice president and chief technology officer (CTO), leads Verisign’s long-term research program. Through the program’s innovation initiatives, the CTO organization, in collaboration with business and technology leaders across the company, explores emerging technologies, assesses their impact on the company’s business, prototypes and evaluates new concepts, and recommends new strategies and solutions. Burt is also responsible for the company’s industry standards engagements, university collaborations and technical community programs.

Prior to joining Verisign in 2011, Burt served as the founding director of the EMC Innovation Network, the global collaboration among EMC’s research and advanced technology groups and its university partners. He joined EMC from RSA Security, where he was vice president of research and chief scientist. Burt started his career at RSA in 1989, where, as the founding scientist of RSA Laboratories, his contributions included the development of the Public-Key Cryptography Standards (PKCS), now widely deployed in internet security.

Burt has held appointments as a guest professor at Wuhan University’s College of Computer Science and as a guest professor and member of the international advisory board of Peking University's School of Software and Microelectronics. He has also taught at Stanford University and Rochester Institute of Technology. Burt was program co-chair of Cryptographic Hardware and Embedded Systems (CHES) 2002, chair of the Institute of Electrical and Electronics Engineers (IEEE) P1363 working group, program chair of CRYPTO ’97, and general chair of CRYPTO ’91. He currently serves on the scientific advisory board of QEDIT, a privacy-enhancing technology provider.

Burt is a member of the Association for Computing Machinery, a senior member of the IEEE Computer Society, and a member of Tau Beta Pi. Burt received PhD, Master of Science, and Bachelor of Science degrees in computer science from the Massachusetts Institute of Technology (MIT), where his research focused on cryptography.

Recent posts by Dr. Burt Kaliski Jr.:

Colloquium on Collisions: Expert Panelists to Select Papers, Award $50K First Prize

According to the Online Etymology Dictionary, the verb collide is derived from the Latin verb collidere, which means, literally, “to strike together”:  com- “together” + lædere “to strike, injure by striking.”

Combined instead with loquium, or “speaking,” the com- prefix produces the Latin-derived noun colloquy: “a speaking together.”

Researchers and practitioners know well the benefits of the colloquium, the technical conference, a gathering of those speaking together on a topic.

So consider WPNC 14 – the upcoming namecollisions.net workshop – a colloquium on collisions: speaking together to keep name spaces from striking together.


Insights on the Technology in the Real World

At each of our Verisign Labs Distinguished Speaker Series events I learn something new that stays with me and helps shape my thinking about technology and its impact on the world. The most recent brought the benefit of three insights, as the expanded event, Advancing Internet Technologies in the Developing World, featured a keynote speaker as well as two recipients of Verisign’s Infrastructure Grants.


Collisions Ahead: Look Both Ways before Crossing

Many years ago on my first trip to London, I encountered for the first time signs that warned pedestrians that vehicles might be approaching from a different direction than they were accustomed to in their home countries, given the left-versus-right-side driving patterns around the world. (I wrote a while back about one notable change from left to right, the Swedish “H Day,” as a comment on the IPv6 transition.)

If you’re not sure on which side to expect the vehicles, it’s better to look both ways — and look again — if you want to reduce the risk of a collision.


Rewarding Research: A Better Connected World, Name Collisions and Beyond

It’s a privilege for Verisign to welcome the recipients of our 2012 Internet Infrastructure Grant program this week; they will be presenting the results of research their teams have conducted over the past year and a half. The results will be the focus of our fourth and final Verisign Labs Distinguished Speaker Series event for the year.

The event will open with a keynote talk by Prof. Ellen Zegura of Georgia Tech (United States), who will give an overview of the field these two projects explore, “Intermittent and Low-Resource Networks: Theory and Practice.” It’s an honor to have Prof. Zegura with us to describe both the academic and hands-on work she’s conducted in this important area.


Pioneering Technologies for the Long Term

We recently hosted Dr. Ralph Merkle as a guest speaker for the Verisign Labs Distinguished Speaker Series. His talk, “Quantum Computers and Public-Key Cryptosystems,” was a great presentation on how molecular nanotechnology — the ability to economically manufacture most arrangements of atoms permitted by physical law — could fundamentally alter the world as we know it. Ralph’s and many others’ research on this topic has been groundbreaking and we are grateful he took the time to come and share his knowledge.


Part 4 of 4 – Conclusion: SLD Blocking Is Too Risky without TLD Rollback

ICANN’s second-level domain (SLD) blocking proposal includes a provision that a party may demonstrate that an SLD not in the initial sample set could cause “severe harm,” and that SLD can potentially be blocked for a certain period of time. The extent to which that provision would need to be exercised remains to be determined. However, given the concerns outlined in Part 2 and Part 3 of this series, it seems likely that there could be many additions to (and deletions from!) the blocked list, given the lack of correlation between the DITL data and actual at-risk queries.


vBSDcon: Builders and Archaeologists

Fascinating tour of C compiler evolution by David Chisnall http://vrsn.cc/1dUb5rY @Verisign’s #vBSDcon. Compatible with DOS or VAX?

I began my journey into computer science as a high school freshman coding on a TI-59 calculator. Later in my high school years, I wrote computer chess games on a PDP-11/34 minicomputer in BASIC and, for speed, in assembly language. I might have contributed inadvertently to the Y2K problem with some FORTRAN and COBOL programs I wrote in the early 1980s. In college, I learned LISP and CLU on a MULTICS operating system, and had a part-time job where I programmed on a VAX-11/750. But eventually I did get around to coding in C on a Unix box.

So this is a little more information than 140 characters would allow, which may explain why I found David Chisnall’s opening talk at the recent vBSDcon so fascinating. DOS and VAX are to computer professionals what the classics are to the liberal arts: our Iliad and Odyssey. And C and Unix, in their various forms, are the living languages that preserve the connection to the early days – the contemporary variants of Koine Greek. The art of building C compilers, as well as operating systems, continues to advance.


Part 1 of 4 – Introduction: ICANN’s Alternative Path to Delegation

As widely discussed recently, observed within the ICANN community several years ago, and anticipated in the broader technical community even earlier, the introduction of a new generic top-level domain (gTLD) at the global DNS root could result in name collisions with previously installed systems. Such systems sometimes send queries to the global DNS with domain name suffixes that, under reasonable assumptions at the time the systems were designed, may not have been expected to be delegated as gTLDs. The introduction of a new gTLD may conflict with those assumptions, such that the newly delegated gTLD collides with a domain name suffix in use within an internal name space, or one that is appended to a domain name as a result of search-list processing.
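The search-list behavior described above can be sketched in a few lines of Python. This is an illustrative model only; the suffixes and the short host name are hypothetical, not drawn from any real deployment, and real resolvers apply additional rules (such as ndots thresholds) before qualifying a name:

```python
# Illustrative sketch of DNS search-list processing: a short, unqualified
# name is tried against each suffix in the resolver's search list before
# being sent as-is. The suffixes below are hypothetical examples.
SEARCH_LIST = ["corp", "corp.example.com"]

def candidate_queries(name: str) -> list[str]:
    """Return the fully qualified names a simple resolver might try for `name`."""
    if name.endswith("."):              # already fully qualified; try only as-is
        return [name.rstrip(".")]
    return [f"{name}.{suffix}" for suffix in SEARCH_LIST] + [name]

# An internal host name like "mail" yields the query "mail.corp". If the
# string "corp" were later delegated as a gTLD, that query could leak to
# the global DNS and collide with the new delegation.
print(candidate_queries("mail"))  # ['mail.corp', 'mail.corp.example.com', 'mail']
```

The sketch shows why the collision arises without any change on the installed system: the system’s queries stay the same, but the global DNS’s answers to them change once the suffix is delegated.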


Part 3 of 4 – Name Collision Mitigation Requires Qualitative Analysis

As discussed in the several studies on name collisions published to date, determining which queries are at risk, and thus how to mitigate the risk, requires qualitative analysis (New gTLD Security and Stability Considerations; New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis; Name Collisions in the DNS). Blocking a second-level domain (SLD) simply on the basis that it was queried for in a past sample set runs a significant risk of false positives. SLDs that could have been delegated safely may be excluded on quantitative evidence alone, limiting the value of the new gTLD until the status of the SLD can be proven otherwise.

Similarly, not blocking an SLD on the basis that it was not queried for in a past sample set runs a comparable risk of false negatives.
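The false-positive and false-negative argument can be made concrete with simple set arithmetic. Everything in this sketch is hypothetical: the observed sample and the truly at-risk set are invented for illustration, since the whole point is that the at-risk set is unknown to a purely quantitative scheme:

```python
# Hypothetical sketch of why purely quantitative SLD blocking misfires.
# "sample" is what a DITL-style capture happened to observe; "at_risk" is
# the (unknown in practice) set of SLDs whose queries would cause harm.
sample = {"mail", "wpad", "intranet"}    # SLDs observed in the capture
at_risk = {"mail", "printer"}            # SLDs truly at risk (illustrative)

blocked = sample                          # quantitative rule: block what was seen
false_positives = blocked - at_risk       # safe SLDs blocked anyway
false_negatives = at_risk - blocked       # risky SLDs left unblocked

print(sorted(false_positives))  # ['intranet', 'wpad']
print(sorted(false_negatives))  # ['printer']
```

Because the rule keys only on occurrence in the sample, both error sets can be nonempty at once; shrinking one by adjusting the sample does nothing to shrink the other.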

A better way to deal with the risk is to treat not the symptoms but the underlying problem: that queries are being made by installed systems (or internal certificates are being employed by them) under the assumption that certain gTLDs won’t be delegated.



Part 2 of 4 – DITL Data Isn’t Statistically Valid for This Purpose

For several years, DNS-OARC has been collecting DNS query data “from busy and interesting DNS name servers” as part of an annual “Day-in-the-Life” (DITL) effort (an effort originated by CAIDA in 2002) that I discussed in the first blog post in this series. DNS-OARC currently offers eight such data sets, covering the queries to many but not all of the 13 DNS root servers (and some non-root data) over a two-day period or longer each year from 2006 to present.  With tens of billions of queries, the data sets provide researchers with a broad base of information about how the world is interacting with the global DNS as seen from the perspective of root and other name server operators.

In order for second-level domain (SLD) blocking to mitigate the risk of name collisions for a given gTLD, it must be the case that the SLDs associated with at-risk queries occur with sufficient frequency and geographical distribution to be captured in the DITL data sets with high probability. Because it is a purely quantitative countermeasure, based only on the occurrence of a query, not the context around it, SLD blocking does not offer a model for distinguishing at-risk queries from queries that are not at risk.  Consequently, SLD blocking must make a stronger assumption to be effective:  that any queries involving a given SLD occur with sufficient frequency and geographical distribution to be captured with high probability.

Put another way, the DITL data set – limited in time to an annual two-day period and in space to the name servers that participate in the DITL study – offers only a sample of the queries from installed systems, not statistically significant evidence of their behavior and of which at-risk queries are actually occurring.
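The sampling limitation can be illustrated with a back-of-the-envelope calculation. The numbers here are assumed for illustration, not measured from DITL data: suppose a system emits q queries per day for a given SLD, the capture window lasts d days, and each query independently has probability p of reaching a participating server during that window:

```python
# Back-of-the-envelope sketch (assumed parameters, not measured data):
# probability that at least one query from a low-rate source is captured
# in a DITL-style window, treating queries as independent trials.
def capture_probability(q: float, d: float, p: float) -> float:
    n = q * d                     # expected number of queries in the window
    return 1 - (1 - p) ** n       # P(at least one query is captured)

# A system querying once a day, with a 10% chance per query of hitting a
# participating server, is captured by a two-day window only ~19% of the
# time -- i.e., it is missed roughly 81% of the time.
print(round(capture_probability(q=1, d=2, p=0.10), 2))
```

Under these assumed numbers, a two-day sample would miss such a source far more often than it catches it, which is why occurrence in the DITL data is weak evidence about which at-risk queries exist.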