Note: This almost certainly means that I didn't understand the problem. I wrote a reply yesterday and seem to have lost it in the aether. Looking at it again with sleep and caffeine, I realize I'm missing something. I was talking in terms of building the initial tree and read right past your mention of filling in gaps, which I didn't really understand. I don't understand why there's a 'changing frequency list': for a single body of text there's only one frequency list, no?

Voronich,
It is important to keep in mind that this is an idea coming from someone on my team completely on their own, with no knowledge of existing compression techniques. I am simply trying to find ways to help them improve by building on their existing ideas (otherwise, I would just have them study LZW).
Let's say we have a file that only contains lowercase letters (a-z), space, and newline. This means that we have 28 symbols. We choose to encode each symbol in 5 bits. This gives us a 3/8 reduction in size (5 bits instead of 8). Now, setting aside variable-length encodings (Huffman, LZW, etc.), can we improve the compression?
Well, 2^5 = 32 and we are only using 28, so can we use the 4 leftover codes for anything? I know, let's examine the frequency of N-character sequences and pick the 4 that give the greatest reduction in the overall file size (setting aside that there will need to be a dictionary that explains how to expand the extra 4 codes, and a way to mark where the dictionary ends and the 5-bit data begins). For memory reasons, we determine that N can't be arbitrarily long: we can only go up to sequences of 4 characters. We then have a lookup table of every 2-, 3- and 4-character sequence with the corresponding frequency count.
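Building that lookup table might be sketched as follows (a sliding-window count over the raw text; the function name and sample string are illustrative):

```python
from collections import Counter

def ngram_counts(text, max_n=4):
    """Count every 2-, 3- and 4-character sequence in text."""
    counts = Counter()
    for n in range(2, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i+n]] += 1
    return counts

counts = ngram_counts("here and there and here")
print(counts["here"], counts["he"])  # 3 3  ('there' contains both)
```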
Assuming you have followed me this far, I can now explain the problem. Let's say the sequence 'here' appears 40 times (160 bytes in the 8-bit input) and will give the greatest reduction in the output (25 bytes, setting aside the dictionary). We go ahead and make that substitution. The second we decide to pick that one, the frequencies of some of the others must be altered, because they share substrings of what we just replaced.
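The arithmetic behind "greatest reduction" can be sanity-checked: each occurrence swaps n five-bit codes for a single spare code, so a candidate saves count * (n - 1) * 5 bits. A hypothetical helper (the function name is mine):

```python
def substitution_savings(count, n, code_bits=5):
    """Bits saved by replacing `count` occurrences of an n-character
    sequence (n codes each) with a single spare code."""
    return count * (n - 1) * code_bits

bits = substitution_savings(40, 4)  # 'here', 40 occurrences
print(bits, "bits =", bits // 8, "bytes")  # 600 bits = 75 bytes
```

That matches the figures above: 40 occurrences of 'here' cost 100 bytes in 5-bit codes, compress to 25 bytes as a single code, saving 75.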
Does that make sense?

AH! Yes. I see what you mean now. That's actually a fun problem. My first "real world" answer would be to accept the inefficiency of the degrading accuracy of the table. But that's no fun. If you did a true multi-pass, you'd run into the problem of your universe of 8-bit symbols jumping by virtue of the previous encoding pass. So that's no fun either. It looks to me like, given lock-in to the initial approach, you'd have to take the optimum compression candidate and, for each substituted instance of 'here', decrement the individual cases of its components from the frequency distribution table, including the bounding characters (the spaces in the 2-grams of ' here ', for instance). That would leave you with an accurate frequency distribution, having pulled the 40 instances of 'he', 'er', 're', ' h' and 'e ' (with the foolish assumption that 'here' always occurs in the middle of a sentence). So the table is accurate again.
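The decrement step described above could be sketched like this (a minimal sketch assuming every occurrence is space-delimited, as in the ' here ' example; the function name and toy table are made up):

```python
from collections import Counter

def claim_candidate(counts, seq, occurrences, pad=" "):
    """After committing to substitute `seq`, decrement every 2- to
    4-gram overlapping its occurrences, including the bounding
    characters, so the table stays accurate."""
    padded = pad + seq + pad
    for n in range(2, 5):
        for i in range(len(padded) - n + 1):
            gram = padded[i:i+n]
            if gram != seq:             # the chosen entry itself is handled below
                counts[gram] -= occurrences
    counts.pop(seq, None)               # candidate is now spoken for
    return counts

# Toy table: 'here' appears 40 times, so 'he', ' h', etc. each lose 40.
table = Counter({"here": 40, "he": 55, "er": 48, "re": 52, " h": 44, "e ": 47})
claim_candidate(table, "here", 40)
print(table["he"], table[" h"])  # 15 4
```

A `Counter` is convenient here because missing keys read as zero, so grams the toy table never saw simply go negative instead of raising an error; a real implementation would start from a complete table.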