Ron was wrong, Whit is right, really?

Recently Arjen Lenstra and colleagues published a paper titled "Ron was wrong, Whit is right." For this paper they obtained a large number of public keys from various sources and compared them to each other.

"Public Keys" are used in many applications including security for the Web. In theory public keys are should be derived from a source of true "random numbers." If this is the case, no two keys should be identical.

Yet they found thousands of keys which were in fact identical. Many of these may be the result of "default" keys installed in devices at the factory and then never changed (much like factory passwords that are never changed). However, that doesn't appear to account for all of them.

So what does it mean if two devices (websites or phones or whatever) share the same public key? Well, it implies that they share a common "private" key (the other half of a public key, the part that is supposed to be a secret). The first-order issue is that the owners of these devices can read each other's traffic (web traffic, e-mail, whatever is protected by the shared key). However, if you are the owner of a shared key, you probably don't know the other person who shares it, so the risk is low.

In the real world, physical keys are often re-used. If you own a car, there are probably many other cars out there that your car key will fit. You just don't know where to try!

However, there is a special and important case here. In the case of the RSA cipher, each key is derived from a pair of randomly chosen large prime numbers. Lenstra and company discovered thousands of RSA keys where only ONE of the primes was the same. This is a very big problem! Why? Because if I have in my possession two RSA public keys that share a common prime (and I am not the owner of either key), then I can trivially derive the private key for both keys!
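The attack is nothing more than a greatest-common-divisor computation. Here is a minimal sketch using hypothetical toy primes (real RSA primes are hundreds of digits long): if two moduli share a prime, `gcd` recovers it almost instantly, even though factoring a well-formed modulus of the same size would be infeasible.

```python
from math import gcd

# Hypothetical toy primes; real RSA primes are hundreds of digits long.
p  = 1000003   # the prime the two keys accidentally share
q1 = 1000033
q2 = 1000037

n1 = p * q1    # public modulus of the first key
n2 = p * q2    # public modulus of the second key

# One cheap gcd computation recovers the shared prime...
shared = gcd(n1, n2)
assert shared == p

# ...and both moduli then factor immediately, exposing both private keys.
assert n1 // shared == q1
assert n2 // shared == q2
```

Note that neither modulus is "factored" in the usual sense; the shared prime falls out directly, which is why this failure mode is so much worse than two parties merely sharing an identical key.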

This means that both keys are completely insecure: anyone on the Internet can compute the private keys, destroying the security of the keys, period! In their paper, Lenstra and company point out that gathering the millions of keys they analyzed was not hard, and they suspect that intelligence agencies have probably already done this exercise and are in a position to exploit any of the RSA keys they discovered that share a common prime number.

This is where the paper derives its title. Ron is a reference to Ron Rivest, one of the authors of the RSA cipher (the "R" of RSA). Whit is Whitfield Diffie, co-inventor of the Diffie-Hellman cipher. In Diffie-Hellman there is only one secret value, so there is no equivalent of two keys sharing a common prime.

The paper implies that RSA, as a cipher system, has a significant vulnerability. However, I beg to differ...

The first thing to understand is that when building cryptographic systems, if you need a "random" number, it had better be truly random. If not, then you are swimming in the part of the ocean that on maps is labeled "Here be Monsters!" The RSA keys that shared a prime could only come about if the random number generator that was used was, well, not quite so random!

That said, we can reduce this vulnerability with a simple trick. Most RSA systems generate the two random primes, P and Q, by seeding a pseudo-random number generator with "real" randomness. The problem is that more "real" randomness is inserted between the generation of P and Q. So it is possible for a system with a poor randomness source (low entropy) to generate the same P as another system in a similar situation. But because extra "real" randomness is injected before Q is computed, a different Q may be selected.

The fix for this is quite simple. Instead of generating randomness to seed P and later Q, generate enough randomness in one shot for both P and Q. In this fashion, if two systems have a flawed random number generator, they are most likely to generate the same P and the same Q (identical keys, the less dangerous failure) rather than the same P with a different Q. In other words, we are attempting to make it difficult to generate the same P and a different Q. To reduce the risk that the random number generator is flawed in a way where the first bits generated are the same and later bits differ, all of the generated bits should be "mixed" (probably using a good cryptographic hash) and then used to generate the two primes.
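A minimal sketch of that approach, using toy 32-bit primes and a trial-division primality test purely for illustration (the function name and structure are my own; a real implementation would use a vetted CSPRNG and a cryptographic library's prime generation): all gathered entropy is hashed into one pool first, the pool is expanded with SHA-256 in counter mode, and both primes are drawn from that single stream.

```python
import hashlib

def prime_pair_from_entropy(entropy: bytes, bits: int = 32):
    """Derive BOTH primes from one mixed entropy pool (sketch only).

    Because P and Q both come from the same hashed pool, two machines
    with the same flawed entropy produce the same (P, Q) pair, instead
    of the far more dangerous "same P, different Q."
    """
    pool = hashlib.sha256(entropy).digest()   # mix all the bits together

    def stream(i: int) -> int:
        # Expand the pool deterministically: SHA-256 in counter mode.
        block = hashlib.sha256(pool + i.to_bytes(4, "big")).digest()
        return int.from_bytes(block, "big") >> (256 - bits)

    def next_prime(n: int) -> int:
        # Toy trial-division search; fine for 32-bit demo values only.
        def is_prime(m: int) -> bool:
            if m < 2:
                return False
            d = 2
            while d * d <= m:
                if m % d == 0:
                    return False
                d += 1
            return True
        while not is_prime(n):
            n += 1
        return n

    p = next_prime(stream(0) | 1)   # both primes come from the ONE pool
    q = next_prime(stream(1) | 1)
    return p, q

p, q = prime_pair_from_entropy(b"whatever entropy the system gathered")
```

The point of the sketch is the structure, not the arithmetic: no fresh randomness enters between the derivation of P and Q, so a weak entropy source degrades to duplicate keys rather than to factorable ones.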


From: Michael Grant

Hi Jeffrey,

Do you know what software (using which random source) most of these flawed keys were generated with? I'm wondering if the underlying problem is that, since a majority of the web servers out there are Apache running on some variety of Linux, people generated their keys using OpenSSL, and the problem may stem from some rpm/package that isn't set up with a decent random source.

It would be interesting to put together a site where people could come along, drop in their public key, and have it tell them how secure that key was based on whether duplicates are found in the database.

Michael Grant

From: Brian C. Wells

@Michael: The paper mentions that the authors considered setting up just that kind of website, but decided not to because any malicious user could put OTHER people's public keys into the site to find vulnerable keys, without having to do all the work of collecting and analyzing the database themselves.  It's a hairy mess, but I'm sure security crackers will duplicate this effort themselves eventually (it reminds me of Rainbow Tables), so developers will have to figure something out.  Maybe public key servers, before they accept a key, could check for common factors with all the other keys they know about?  And then refuse to accept the key if one is found?
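[The key-server check proposed in this comment could be sketched roughly as follows; the class name is invented for illustration, and a real deployment over millions of keys would use a batch-GCD algorithm rather than this quadratic pairwise scan.]

```python
from math import gcd

class ModulusRegistry:
    """Toy key server that rejects RSA moduli sharing a factor (sketch)."""

    def __init__(self):
        self.known = []

    def accept(self, n: int) -> bool:
        """Store n and return True only if it shares no factor with known keys."""
        for m in self.known:
            # gcd > 1 means a shared prime (or a duplicate modulus);
            # either way, the new key should be refused.
            if gcd(n, m) != 1:
                return False
        self.known.append(n)
        return True

reg = ModulusRegistry()
print(reg.accept(1000003 * 1000033))   # True:  first key accepted
print(reg.accept(1000037 * 1000039))   # True:  no common factor
print(reg.accept(1000003 * 1000081))   # False: shares the prime 1000003
```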

Jeff, on the main topic, I'd like to say thanks for explaining!  I didn't quite understand why the paper was saying more randomness was bad, until I read your post.  Counter-intuitive, but that's true of a lot of crypto stuff I guess (not that I'm an expert, by any means).  Again, thanks.

Copyright © 2009-2023 Jeffrey I. Schiller