Short-lived certificates cannot solve the “unaware victim” problem – 2


My response to a recent blog post drew a quick reply that I feel obliged to answer.

First of all, the positive: agreement on the need to invest in a new revocation infrastructure.

“Basically, I see the value in revocation checking and think the investments need to be made to make it work and be relevant for the WebPKI of today…….”  

Now let’s recap what the heck we are talking about…

Facts:
- Today’s revocation infrastructure is not capable of meeting today’s needs.
- Today there are no standards for revocation policies. (There are standards for issuance, but none for revocation.)

These two points are well documented and widely accepted. No argument here…

The solution recommended for the lack of a scalable revocation infrastructure was short-lived certificates, where the author stated the following:

“If we want a scalable solution to the revocation problem then it’s probably going to come in the form of short-lived certificates”

So far so good… In a world where we don’t have a scalable revocation infrastructure, short-lived certificates make sense. Of course, the cost of short-lived certificates is very high: we would have to change the whole computing infrastructure so that certificates are renewed on a daily basis (daily, in my view, for it to be secure enough, versus 90-day certificates), and we would introduce a new moving part that might cause vulnerabilities and operational issues. It is a big risk to the computing infrastructure, a very BIG risk! But I do agree that if there were no scalable revocation infrastructure, we would have no option but to take this huge risk.

And this is where the confusion arises… We believe in the ingenuity of scientists and engineers to solve the revocation problem, make it scalable, and enable this new scalable revocation infrastructure to serve the internet.

Enter the new world of “Scalable Revocation”…

The world we want to create is one where we have a scalable revocation infrastructure that can serve browsers, operating systems and IoT devices with a response as small as 32 bytes.
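To make the 32-byte figure concrete, here is a minimal sketch of what such a compact status response could look like. This is purely illustrative, not a description of any deployed protocol: it assumes a CA-held secret key and packs one status byte plus a truncated authentication tag into exactly 32 bytes.

```python
# Hypothetical sketch of a fixed-size, 32-byte revocation status token.
# CA_KEY, status_token and verify are illustrative names, not a real API;
# a production design would use a signature, timestamps, etc.
import hashlib
import hmac
import os

CA_KEY = os.urandom(32)  # stand-in for the CA's authentication key

def status_token(serial: int, revoked: bool) -> bytes:
    """Return a 32-byte token: 1 status byte + a 31-byte truncated MAC."""
    status = b"\x01" if revoked else b"\x00"
    msg = serial.to_bytes(20, "big") + status
    tag = hmac.new(CA_KEY, msg, hashlib.sha256).digest()[:31]
    return status + tag

def verify(serial: int, token: bytes) -> bool:
    """Check that the token is authentic for this serial number."""
    revoked = token[:1] == b"\x01"
    return hmac.compare_digest(token, status_token(serial, revoked))

tok = status_token(0xDEADBEEF, revoked=False)
assert len(tok) == 32          # the whole answer fits in 32 bytes
assert verify(0xDEADBEEF, tok)
```

The point of the sketch is only that a per-certificate status answer need not be large: a status flag plus enough authenticated material to prevent forgery fits comfortably in a fixed, cache-friendly 32-byte response.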

The world we want to create is one where we have standards for revocation policies, coupled with SLAs that CAs are audited against. These SLAs could range from “time from awareness to revocation” to “response times of the technology”.

The world we want to create is one where we have categories of revocation reasons, so that browsers are informed of the reason for a revocation and can make a more informed decision about what to do with the certificate.
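Categories of revocation reasons already exist at the encoding level: RFC 5280 defines the CRLReason codes below. What does not exist is a standard for how clients should act on them. The policy mapping in this sketch (which reasons hard-fail versus soft-fail) is my own hypothetical illustration, not any browser’s actual behavior.

```python
# CRLReason code values are from RFC 5280, section 5.3.1 (real).
# The browser_action policy below is hypothetical, showing how a client
# could grade reasons once reason-aware revocation is standardized.
from enum import IntEnum

class CRLReason(IntEnum):
    unspecified = 0
    key_compromise = 1
    ca_compromise = 2
    affiliation_changed = 3
    superseded = 4
    cessation_of_operation = 5
    certificate_hold = 6
    # value 7 is not used in RFC 5280
    remove_from_crl = 8
    privilege_withdrawn = 9
    aa_compromise = 10

# Hypothetical policy: hard-fail on compromise, soft-fail otherwise.
HARD_FAIL = {CRLReason.key_compromise, CRLReason.ca_compromise}

def browser_action(reason: CRLReason) -> str:
    """Return 'block' for compromise reasons, 'warn' for the rest."""
    return "block" if reason in HARD_FAIL else "warn"

assert browser_action(CRLReason.key_compromise) == "block"
assert browser_action(CRLReason.superseded) == "warn"
```

A certificate revoked because it was superseded is a very different event from one revoked because its key was compromised; surfacing that distinction is what lets browsers respond proportionately.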

The world we want to create is one where subscriber agreements enforce the revocation standards, and hence enable CAs to revoke certificates for any legitimate reason, such as protocol vulnerabilities, so that we don’t have to wait for “natural expiration” to expunge vulnerable certificates.

In this new world, short-lived certificates add no value; there simply is no practical use case for them. On the contrary, they potentially increase risk, due to the changes required in the computing infrastructure, and introduce yet another moving part into a complex computing ecosystem, exposing businesses to more risk.

In conclusion, I would like to re-emphasize that I too wholeheartedly agree that we need to invest in a brand-new revocation infrastructure, and I would welcome new policies and standards to go along with it.

Melih

Time for questions…

But this time I was the one asking the question… to someone who saved us billions of bytes because he can’t spell 😉 I am talking about Phillip Hallam-Baker, who made a spelling mistake while helping with the design of the HTTP protocol: he spelt the Referer field with one ‘r’ instead of two (“referer” vs. “referrer”). I asked him about SSL accelerators and their uses in the 90s, and here is his response…

 

I note with some interest and more than a little surprise, recent claims made about the history of SSL. I am surprised because not only is the alternative history not true, it makes very little sense.

For example, it has been asserted that the original design of the SSL PKI was predicated on the use of SSL accelerators which provided protection against accidental release of a private key. This is not only wrong, it is surely obvious that nobody was designing SSL accelerator hardware until there was an SSL protocol to accelerate.

When I started work on Web security in 1992, the industry-standard HSM was a BBN SafeKeyper box costing somewhere north of $20,000. They were large and exquisitely engineered. The ignition keys used to activate the device were little engineering marvels in their own right. And they were slow: generating a 4096-bit key on a BBN SafeKeyper box took seven hours on one occasion. I think it is safe to assume nobody was using SafeKeyper boxes for hardware acceleration.

Despite the lack of FIPS 140 Level 3 certified modules, Microsoft’s CryptoAPI was designed to support use of cryptographic hardware from the first release. SafeKeyper boxes were as rare as hen’s teeth, but many analysts were loudly predicting smartcards would be the future. Microsoft did not succeed in making smartcards ubiquitous on the desktop, but that was not for lack of effort. One member of the security team lamented that she had pushed for the company to get into the keyboard business so that they could sell one with a smartcard reader, but nobody wanted to buy them.

The software-based Cryptographic Service Provider for RSA allowed keys to be marked ‘not for export’ from a very early release, if not the first. The CAPI features used to protect private keys were expanded and exposed as a separate API in Windows 2000 as the Data Protection API. The idea that a computer system should allow private keys to be bound to a particular machine so that they cannot be exported is surely an obvious benefit. Apple has adopted the same approach. It is rather a pity that (to date) no similar feature has been available as a pervasive Linux capability.

The dotCom gold rush spurred many companies to start work on SSL accelerator products but these only began to reach the market after the dotCom boom was already underway. FIPS hardware certifications take some time to acquire and the market was demanding speed rather than security. I am not aware of key protection features in SSL accelerator boards having any impact on the architecture of the WebPKI for the simple reason that most of the early products didn’t support it. The accelerator was typically kicked into action by loading in the private key from a file on disk during the boot sequence. Key protection only became ubiquitous on SSL accelerator products after they were no longer required for performance.

I am also curious to know how ValiCert could be the ‘lead creators’ of OCSP when the company was formed in 1996 and the original OCSP protocol, designed by Mike Myers and Warwick Ford at VeriSign, was designed to support Microsoft’s Authenticode, which launched in 1996. I wrote the first Internet Draft for OCSP from Mike’s original design notes. The main change ValiCert made was to convert my sane encoding into ASN.1 (which, not coincidentally, is 1.NSA backwards) and to strip out support for using the same infrastructure as a validation protocol. As a result, PKIX ended up with two separate query/response protocols when the same requirements would have been much better met by a single one.

Phillip Hallam-Baker