PerlMonks
Re^2: Dynamically Updating Frequency Analysis

by Limbic~Region (Chancellor)
on May 07, 2013 at 15:47 UTC ( #1032497=note )


in reply to Re: Dynamically Updating Frequency Analysis
in thread Dynamically Updating Frequency Analysis

Voronich,
I am afraid I don't have the first clue how to apply what you said. To make sure we are speaking the same language: I believe your use of 'n-gram' is the same as my use of 'tuple'. If you mean something else, please let me know. Assuming that is correct, he is already that far along, and a hand-wavy 'then worry about building the tree' doesn't help.

Perhaps you could explain what you mean using a very short example:

Where do you want to go? Over there. Then go already.

Cheers - L~R


Re^3: Dynamically Updating Frequency Analysis
by Voronich (Hermit) on May 08, 2013 at 13:54 UTC

    Note: This almost certainly means that I didn't understand the problem. I wrote a reply yesterday and seem to have lost it in the aether.

    Looking at it again with sleep and caffeine, I realize I'm missing something. I was talking in terms of building the initial tree and read right past your mention of filling in gaps, which I didn't really understand.

    I don't understand why there's a 'changing frequency list'. For a single body of text there's only one frequency list, no?

      Voronich,
      It is important to keep in mind that this is an idea coming from someone on my team completely on their own - with no knowledge of the various compression techniques out there. I am simply trying to find ways to help them improve by building on their existing ideas (otherwise, I would just have them study LZW).

      Let's say we have a file that only contains lowercase letters (a-z), space and newline. That means we have 28 symbols. We choose to encode each symbol in 5 bits. This gives us a 3/8 reduction in size (5 bits per symbol instead of 8). Now, setting aside variable-length encodings (Huffman, LZW, etc.) - can we improve the compression?
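      As a sketch of that arithmetic (a Python illustration; the alphabet ordering and the names are my assumptions, not anything from the thread):

```python
import string

# Assumed 28-symbol alphabet: a-z plus space and newline (ordering is arbitrary).
ALPHABET = string.ascii_lowercase + " \n"
CODE = {ch: i for i, ch in enumerate(ALPHABET)}

def encode_bits(text):
    """Encode each symbol as a fixed 5-bit code; return a bit-string."""
    return "".join(format(CODE[ch], "05b") for ch in text)

# 5 bits per symbol instead of 8: the output is 5/8 the size, a 3/8 saving.
assert len(encode_bits("here")) == 20   # vs 4 * 8 = 32 bits raw
```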

      Well, 2^5 = 32 and we are only using 28, so can we use the 4 leftover codes for anything? I know, let's examine the frequency of N-character sequences and pick the 4 that give the greatest reduction in the overall file size (setting aside that there will need to be a dictionary that explains how to expand the extra 4 sequences, and a way to identify where the dictionary ends and the 5-bit data begins). For memory reasons, we determine that N can't be arbitrarily long - we can only go up to sequences of 4 characters. We then have a lookup table of every 2-, 3- and 4-character sequence with the corresponding frequency count.
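      A minimal Python sketch of that lookup table (the function and scoring names are my own, not from the thread):

```python
from collections import Counter

def ngram_counts(text, min_n=2, max_n=4):
    """Tally every 2-, 3- and 4-character sequence in the text."""
    counts = Counter()
    for n in range(min_n, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

counts = ngram_counts("where do you want to go over there")
# An n-character sequence occurring f times saves roughly f * (n - 1)
# five-bit codes when collapsed to one code (dictionary cost set aside).
best = max(counts, key=lambda s: counts[s] * (len(s) - 1))
```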

      Assuming you have followed me this far, I can now explain the problem. Let's say the sequence 'here' appears 40 times (160 bytes in the input) and will give the greatest reduction in the output (25 bytes, setting aside the dictionary). We go ahead and make that substitution. The second we pick that one, the frequencies of some of the others must be altered, because they share substrings with what we just replaced.
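      Checking those numbers (the counts are from the post; this little sketch just redoes the arithmetic):

```python
occurrences, seq_len = 40, 4          # 'here' appears 40 times
input_bytes = occurrences * seq_len   # 160 bytes at 8 bits per character
# After substitution, each occurrence is a single 5-bit code:
output_bytes = occurrences * 5 / 8    # 25 bytes, dictionary set aside
```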

      Does that make sense?

      Cheers - L~R

        AH! Yes. I see what you mean now. That's actually a fun problem.

        My first "real world" answer would be to accept the inefficiency of the degrading accuracy of the table. But that's no fun.

        If you did a true multi-pass you'd run into the problem of your universe of 8-bit symbols jumping by virtue of the previous encoding pass. So that's no fun either.

        It looks to me like, given lock-in to the initial approach, you'd have to take the optimum compression candidate and, for each substitution instance of 'here', decrement the individual cases of its components from the frequency distribution table, including bounding characters (the spaces in the 2-grams of ' here ', for instance). That would leave you with an accurate frequency distribution, having pulled the 40 instances of 'he', 'er', 're', ' h' and 'e ' (with the foolish assumption that 'here' always occurs in the middle of a sentence). So the table is accurate again.
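        That bookkeeping might look like this (a Python sketch with names of my own invention; it re-tallies the 2-4 grams and then removes every n-gram overlapping each occurrence of the chosen sequence, bounding characters included):

```python
from collections import Counter

def ngram_counts(text, min_n=2, max_n=4):
    """Tally every 2-4 character sequence (the same table as upthread)."""
    counts = Counter()
    for n in range(min_n, max_n + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts

def decrement_overlaps(counts, text, seq, min_n=2, max_n=4):
    """For each occurrence of `seq`, decrement every n-gram that overlaps
    it -- including bounding grams such as ' h' and 'e ' around 'here' --
    so the table is accurate again for the next candidate pick."""
    pos = text.find(seq)
    while pos != -1:
        end = pos + len(seq)
        lo = max(0, pos - (max_n - 1))
        hi = min(len(text), end + (max_n - 1))
        for n in range(min_n, max_n + 1):
            for i in range(lo, hi - n + 1):
                gram = text[i:i + n]
                # only grams that actually overlap the replaced span
                if i < end and i + n > pos and gram != seq:
                    counts[gram] -= 1
        pos = text.find(seq, end)
    return counts

text = "say here and here again"
counts = ngram_counts(text)
decrement_overlaps(counts, text, "here")
```

        One over-count remains if occurrences of the chosen sequence overlap each other; the sketch ignores that case.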
