As far as Blowfish is concerned, each call encrypts one 64-bit block, and each block is a different plaintext under the same key. You call it as many times as needed for the length of your message. What's the difference between calling it once for a 2K message or twice for two different 1K messages?

In terms of breaking the code mathematically, it's the total number of blocks encrypted that matters, not whether they're logically one email note or n of them. That's why ongoing uses like SSL/TLS switch keys every so often. How many blocks is too many? It's really only an issue with DES and its variations, because the block size is only 64 bits.
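To make that concrete, here's a toy sketch in Python. The "block cipher" is a SHA-256-based stand-in (not real crypto, and not Blowfish) used only to count block operations; the point is that the attacker sees the same total number of blocks under the same key either way.

```python
# Toy sketch, NOT real crypto: a keyed one-way stand-in for a 64-bit
# block cipher, used only to illustrate block-by-block operation.
import hashlib

BLOCK = 8  # 64-bit blocks, as in Blowfish or DES

def toy_encrypt_block(key, block):
    # Stand-in for one call to the real cipher on one 8-byte block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def encrypt_ecb(key, msg):
    msg += b"\x00" * (-len(msg) % BLOCK)  # naive zero padding
    blocks = [msg[i:i+BLOCK] for i in range(0, len(msg), BLOCK)]
    return b"".join(toy_encrypt_block(key, b) for b in blocks), len(blocks)

key = b"secret key"
_, n_one_2k    = encrypt_ecb(key, b"x" * 2048)
_, n_first_1k  = encrypt_ecb(key, b"x" * 1024)
_, n_second_1k = encrypt_ecb(key, b"y" * 1024)

# One 2K message or two 1K messages: same block count under the key.
assert n_one_2k == n_first_1k + n_second_1k  # 256 == 128 + 128
```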

With AES and other modern ciphers, which use 128-bit blocks, it's not much of an issue. I don't know the exact safe limit off-hand, but it's probably more than you have to worry about for modern applications.

Update: With AES and the other finalists, having 128-bit blocks means the safety limit is more than a typical application needs to worry about. If that's not enough, several of those ciphers are defined with larger block sizes (up to 256 bits), rendering that kind of attack a total non-issue.

There is a difference between one large vs. two small messages for a different kind of attack. If you know that the messages begin with the same material (e.g. the To: headers), you might be able to make use of that. You could, for example, tell that the first n blocks of the two emails were the same, indicating they might be to the same person. However, the use of an "initialization vector" (IV) prevents this problem, and then two 1K messages are no different from one 2K message. So I'd say the only thing missing from your example is a different random IV for each message. Note that if you use the last output block of one message as the IV for the next, the result of concatenating the two ciphertexts is literally NO DIFFERENT from concatenating the two plaintexts together and encrypting as one CBC sweep. Take that as a proof of the principle stated above.
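That chaining observation is easy to verify with a toy CBC loop in Python. The "block cipher" here is again a SHA-256-based stand-in used only in the forward direction (not real crypto); the equality it demonstrates holds for any block cipher in CBC mode.

```python
# Toy CBC sketch, NOT real crypto: demonstrates that chaining the last
# ciphertext block of message 1 as the IV of message 2 is identical to
# one CBC sweep over the concatenated plaintexts.
import hashlib
import os

BLOCK = 8

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block(key, block):
    # Keyed stand-in for a block cipher (forward direction only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key, iv, msg):
    assert len(msg) % BLOCK == 0
    out, prev = [], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_block(key, xor(msg[i:i+BLOCK], prev))  # CBC step
        out.append(prev)
    return b"".join(out)

key = b"k"
iv = os.urandom(BLOCK)
p1, p2 = b"A" * 32, b"B" * 32

c1 = cbc_encrypt(key, iv, p1)
c2 = cbc_encrypt(key, c1[-BLOCK:], p2)   # last block of c1 becomes the IV
one_sweep = cbc_encrypt(key, iv, p1 + p2)

assert c1 + c2 == one_sweep              # literally no different
```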

Now RC4 is a stream cipher, as opposed to a block cipher. It's totally different. If you encrypt two messages using RC4 with the same key, then someone can XOR the two ciphertexts and the keystream cancels out! He's left with the XOR of the two plaintexts, and untangling that is not nearly as hard as breaking the cipher.
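Here's a quick demonstration in Python, with a straightforward RC4 implementation (the key and plaintexts are made up for the example):

```python
# RC4 keystream generation: standard KSA + PRGA.
def rc4_keystream(key, n):
    S = list(range(256))
    j = 0
    for i in range(256):                          # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                            # keystream output (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"same key twice"
p1 = b"attack at dawn!!"
p2 = b"attack at dusk!!"
c1 = xor(p1, rc4_keystream(key, len(p1)))
c2 = xor(p2, rc4_keystream(key, len(p2)))

# The keystream cancels: no key needed to get the XOR of the plaintexts.
assert xor(c1, c2) == xor(p1, p2)
```

Wherever the two plaintexts agree, that XOR is zero, so the attacker even learns directly which byte positions match.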

—John

Update: was confusing block size values with key size values.


Re: Re: Safe symmetric encryption - Crypt::CBC + Crypt::Blowfish?
by no_slogan (Deacon) on Feb 08, 2003 at 18:32 UTC

I'd just like to add a few things to John's excellent post.

An amplification: Changing keys regularly is always good cryptographic practice. It reduces the amount of data you lose from a single compromised key. It's only with some ciphers (like RC4, which needs a fresh key for every message) that changing the key becomes critical.

In your mail application, you'll definitely want it to be possible to change keys without a major hassle. Maybe you'll want to use per-message session keys, each encrypted with a (changeable!) master key. Maybe using the same key for everything, and changing it once a month, is enough. Sorry we're all giving the same non-answer, but it depends on your requirements.
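The per-message session-key idea can be sketched in a few lines. This is a hypothetical toy in Python, with a SHA-256 keystream standing in for a real cipher; do not use this construction for real mail, it only shows the key-management shape.

```python
# Toy envelope sketch, NOT real crypto: per-message session keys,
# each wrapped under a changeable master key.
import hashlib
import os

def keystream(key, nonce, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def toy_encrypt(key, msg):
    nonce = os.urandom(16)  # fresh nonce, like a random IV
    return nonce + bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

def toy_decrypt(key, blob):
    nonce, body = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(body, keystream(key, nonce, len(body))))

master_key  = os.urandom(32)   # the changeable master key
session_key = os.urandom(32)   # fresh key for this one message
wrapped     = toy_encrypt(master_key, session_key)
ciphertext  = toy_encrypt(session_key, b"the message body")

# Store (wrapped, ciphertext).  Rotating the master key later means
# re-wrapping only the small session keys; message bodies stay untouched.
assert toy_decrypt(toy_decrypt(master_key, wrapped), ciphertext) == b"the message body"
```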

Yes, random IVs are good. Crypt::CBC uses them by default.

A minor correction: AES uses 128-bit blocks, not 256-bit. Rijndael goes up to 256-bit blocks, but the AES specification doesn't include that. I can't think of any other cipher that uses such large blocks, and there's really not much reason for them. A birthday attack against a 64-bit block cipher (like DES or Blowfish) in a chaining mode is going to need around 30 gigabytes of encrypted data before you expect to see the same block twice. With 128-bit blocks, that goes up to 2**68 bytes, or sixty thousand years at gigabit ethernet speed.
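Those numbers are just birthday-bound arithmetic: after about 2**(n/2) blocks of an n-bit block cipher in a chaining mode, a repeated ciphertext block becomes likely. A quick back-of-the-envelope check in Python:

```python
# Birthday bound: ~2**(n/2) blocks before a collision becomes likely.
def birthday_bytes(block_bits):
    blocks = 2 ** (block_bits // 2)   # roughly sqrt of the block space
    return blocks * (block_bits // 8)  # bytes of ciphertext at that point

b64  = birthday_bytes(64)    # 2**32 blocks * 8 bytes  = 2**35 bytes
b128 = birthday_bytes(128)   # 2**64 blocks * 16 bytes = 2**68 bytes

print(b64 / 2**30, "GiB")                # ~32 GiB for 64-bit blocks
bytes_per_sec = 1e9 / 8                  # gigabit ethernet, in bytes/sec
print(b128 / bytes_per_sec / 3.15e7, "years")  # tens of thousands of years
```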