in reply to Dynamically Updating Frequency Analysis

Oh, that's the XYZ problem. It has been proven to be NP-complete, but the ABC heuristic approach is likely of value.

here to help o/

Alright alright. CB smartassery aside, this reminds me of the natural language text indexing projects I embarked upon when I was a puppy. It sounds like he's trying to do too much at once. Why not emit the n-grams (for n = 1-4) into a frequency distribution hash first, then worry about building the tree out of it?
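Not Perl, I know, but here is a quick Python sketch of the "frequency distribution hash" idea; in Perl it would be a hash of counts keyed by the n-gram. Function name is mine:

```python
from collections import Counter

def ngram_counts(text, max_n=4):
    """Count every n-gram (n = 1..max_n) in text into one frequency table."""
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts
```

Once you have that one flat table, picking winners or building a tree on top of it is a separate, simpler step.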


Re^2: Dynamically Updating Frequency Analysis
by Limbic~Region (Chancellor) on May 07, 2013 at 15:47 UTC
    I am afraid I don't have the first clue how to apply what you said. To make sure we are speaking the same language, I believe your use of 'n-gram' is the same as my use of tuple. If you mean something else, please let me know. Assuming that is correct, then he is already that far along, and a hand-wavy "then worry about building the tree" doesn't help.

    Perhaps you could explain what you mean using a very short example:

    Where do you want to go? Over there. Then go already.

    Cheers - L~R

      Note: This almost certainly means that I didn't understand the problem. I wrote a reply yesterday and seem to have lost it in the aether.

      Looking at it again with sleep and caffeine I realize I'm missing something. I was talking in terms of building the initial tree and read right past your mention of filling in gaps, which I didn't really understand.

      I don't understand why there's a 'changing frequency list'. For a single body of text there's only one frequency list, no?

        It is important to keep in mind that this is an idea coming from someone on my team completely on their own - with no knowledge of various different compression techniques. I am simply trying to find ways to help them improve by building on their existing ideas (otherwise, I would just have them study LZW).

        Let's say we have a file that only contains lowercase letters (a-z) plus space and newline. This means that we have 28 symbols. We choose to encode each symbol in 5 bits instead of 8. This gives us a savings of 3/8. Now setting aside variable-length encodings (Huffman, LZW, etc.) - can we improve the compression?

        Well, 2^5 = 32 and we are only using 28, so can we use the 4 left-over codes for anything? I know, let's examine the frequency of N-character sequences and pick the 4 that give me the greatest reduction in the overall file size (set aside that there will need to be a dictionary that explains how to expand the extra 4 sequences, and a way to identify that the dictionary has ended and the 5-bit file has begun). For memory reasons, we determine that N can't be arbitrarily long - we can only go up to sequences of 4 characters. We then have a lookup table of every 2, 3 and 4 character sequence with the corresponding frequency count.
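        To make the "greatest reduction" concrete: replacing an n-character sequence (n * 5 bits) with a single spare code (5 bits) saves (n - 1) * 5 bits per occurrence. A Python sketch of that greedy ranking, assuming the lookup table is a plain dict of sequence-to-count (function name is mine, and this deliberately ignores the overlap problem described below):

```python
def best_extra_codes(counts, bits=5, k=4):
    """Rank 2-4 character sequences by total bit savings if each were
    assigned one spare code: one occurrence of an n-char sequence shrinks
    from n*bits to bits, saving (n-1)*bits."""
    def savings(item):
        seq, freq = item
        return freq * (len(seq) - 1) * bits
    multi = [(s, c) for s, c in counts.items() if 2 <= len(s) <= 4]
    return sorted(multi, key=savings, reverse=True)[:k]
```

        For example, a 2-char sequence seen 10 times saves 10 * 5 = 50 bits, while a 3-char sequence seen 3 times saves only 3 * 10 = 30 bits, so the shorter-but-commoner sequence wins.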

        Assuming you have followed me this far, I can now explain the problem. Let's say the sequence 'here' appears 40 times (160 bytes in the input) and will give the greatest reduction in the output (25 bytes, setting aside the dictionary). We go ahead and make that substitution. The second we decide to pick that one, the frequencies of some of the others must be adjusted, because they shared substrings with what we just replaced.
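        One blunt way to get the corrected frequencies is to mask out the committed occurrences and recount, rather than trying to decrement every overlapped n-gram in place. A Python sketch under that assumption (function name is mine; an incremental decrement per overlapped window would be the efficient version of the same idea):

```python
from collections import Counter

def adjust_counts(text, chosen, max_n=4):
    """After committing to replace `chosen`, recount n-grams on the text
    with those occurrences masked out, so sequences that shared characters
    with `chosen` (e.g. 'her' and 'ere' inside 'here') drop accordingly."""
    # '\0' acts as a sentinel that no n-gram may span
    masked = text.replace(chosen, "\0")
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(masked) - n + 1):
            gram = masked[i:i + n]
            if "\0" not in gram:
                counts[gram] += 1
    return counts
```

        After picking each of the 4 codes you would re-run this, so every pick is made against frequencies that reflect the picks before it.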

        Does that make sense?

        Cheers - L~R