http://www.perlmonks.org?node_id=1032604


in reply to Dynamically Updating Frequency Analysis

This morning I was discussing memories of Byte, and how it was far superior to anything else then or since. And how the 2011 "re-launch" completely missed the essence of what made Byte stand out from the crowd.

One of the articles that came back to mind was a very detailed discussion of the LZW algorithm, and that struck a chord with the stuff you've been talking about. Just a thought.


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^2: Dynamically Updating Frequency Analysis
by Limbic~Region (Chancellor) on May 08, 2013 at 17:14 UTC
    BrowserUk,
    I don't specifically remember Byte, but I do remember the like. I have fond and frustrating memories of painfully entering programs into the computer, saving them, and attempting to run them, only to get cryptic errors and have to start over. Sometimes a subsequent issue of the magazine would carry a correction.

    I like LZW and the adaptive variations that have evolved from it. At some point, when this is all over, I will walk the team through Huffman Coding and LZW, as well as some others. If you haven't seen my response here, it may be worth a read.

    Cheers - L~R

      If you haven't seen my response here, it may be worth a read.

      Interesting. What your guy seems to be trying to do reminds me of three months of wasted work a few years ago.

      The specification for a proprietary transmission protocol (over RS232) called for each block to carry a checksum. But the bright spark who specified it decided that the checksum should be calculated such that it included its own bytes. Thus a packet looked like:

      #                   payload                   checksum
      xx xx xx xx xx xx xx xx xx xx ... xx xx xx   cc cc cc cc

      The idea was that when verifying the checksum upon receipt, the entire packet (including checksum) could be passed to the checksum algorithm and the result compared against the unsigned integer in the last four bytes.

      Problem: The very thing that makes cryptographic digests of plain text messages work -- the insane combinatorial explosion of work required to find the appropriate padding to make a fake message match the digest -- makes a checksum that covers its own bytes almost impossible to calculate.
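
      To make the difficulty concrete, here is a hypothetical sketch -- the digest choice and the sub name are mine, not the actual protocol. With a hash-like checksum, producing a value that survives being fed back through its own calculation is a brute-force search for a fixed point over all 2**32 candidates, and a fixed point may not even exist:

      use Digest::MD5 qw(md5);

      # Hypothetical: find a 4-byte cc such that the first 4 bytes of
      # digest( payload . cc ) equal cc itself.
      sub find_self_checksum {
          my( $payload ) = @_;
          for my $n ( 0 .. 0xFFFFFFFF ) {
              my $cc = pack( 'N', $n );
              return $cc if substr( md5( $payload . $cc ), 0, 4 ) eq $cc;
          }
          return undef;    # no fixed point exists for this payload
      }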

      I suspect your guy's problem suffers similarly.


        BrowserUk,
        The closest I have come to dealing with that problem is a message format where the first few bytes told how long the message was, but had to include themselves in the length calculation. For illustration purposes, let's say it is just the number of bytes, followed by a capital X, followed by the payload.

        If the payload was 98 bytes and we add the letter X, we get 99, so we start out by saying '99X..'. But by adding '99' to the message we added two more bytes, which made it '101X..'; and because '101' is one byte longer than '99', the final value was '102X'. This was a simple problem to solve, though I never did understand why the length had to include itself.
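
        As a minimal sketch of that fixed-point calculation (the sub name and the trailing 'X' convention are just this illustration, not any real format):

        # Iterate until writing the length digits no longer changes
        # the total size.
        sub total_length {
            my( $payload_len ) = @_;
            my $len = $payload_len + 1;    # payload plus the 'X'
            $len = $payload_len + 1 + length( $len )
                while $len != $payload_len + 1 + length( $len );
            return $len;
        }

        print total_length( 98 ), "\n";    # prints 102, matching the example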

        Getting back to the OP, here is the approach I have been playing with in my head. There will be 96 characters (the Unix newline plus ASCII 32 through ASCII 126, i.e. 95 printable characters). That means a 1/8 reduction simply by going to 7-bit characters, and since 7 bits give 128 codes, it leaves room for 32 multi-character tuples.
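
        A sketch of the 7-bit repacking (the sub names are mine, and a real version would also need to record the character count so trailing pad bits are not mistaken for a character):

        # Pack text drawn from an alphabet of 128 or fewer symbols into
        # 7 bits per character using Perl's bit-string templates.
        sub pack7 {
            my( $text ) = @_;
            # keep the 7 low-order bits of each byte (the MSB is always 0)
            my $bits = join '', map { substr( unpack( 'B8', $_ ), 1 ) } split //, $text;
            return pack( 'B*', $bits );    # pads the final byte with zero bits
        }

        sub unpack7 {
            my( $packed ) = @_;
            my $bits = unpack( 'B*', $packed );
            my $text = '';
            $text .= pack( 'B8', '0' . substr( $bits, 0, 7, '' ) )
                while length( $bits ) >= 7;
            return $text;
        }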

        Rather than keeping track of each tuple separately, keep track of data structures of size 3 (originally I thought 4 would fit into memory, but I am not so sure). Single characters are ignored because they will be encoded using the 7-bit method. For example:

        HER (count 137)
            HE, ER
            Points To:    ERE (count 100), ERM (count 27), RED (count 18)
            Pointed From: IHE (count 11),  HEH (count 2)
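
        A minimal sketch of building those counts (variable names are mine); the Points To / Pointed From links fall out of the overlaps, since the last two characters of one trigram are the first two of the next:

        my $text = do { local $/; <> };    # slurp the input to be compressed
        my %count;
        $count{ substr( $text, $_, 3 ) }++ for 0 .. length( $text ) - 3;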

        Once the data structure is created, get the frequency of all the 2-character tuples (which requires traversing the data structure). With this data in hand, use a heuristic algorithm to pick an approach; you could limit how long you look based on time, number of iterations, or whatnot. Pick the tuple (either 2 or 3 characters) that saves the most space. There should be enough data in the tree to measure the impact of every choice and to re-calculate the frequencies. Once 32 have been chosen, remember the choices and the total size reduction, then restart by picking the 2nd-highest choice first.
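
        A rough sketch of that greedy selection, assuming the %count hash above has been extended to hold both the 2- and 3-character tuple frequencies (the savings formula and names are mine):

        # A tuple of length n replaced by a single 7-bit code saves
        # n - 1 characters per occurrence.
        my @chosen;
        for ( 1 .. 32 ) {
            my( $best ) = sort {
                ( length( $b ) - 1 ) * $count{ $b } <=> ( length( $a ) - 1 ) * $count{ $a }
            } keys %count;
            last unless defined $best;
            push @chosen, $best;
            delete $count{ $best };
            # A full version would re-derive the remaining frequencies here:
            # occurrences claimed by $best consume characters that
            # overlapping tuples can no longer claim.
        }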

        Does this make sense? It does in my head, but I have been struggling to articulate ideas lately.

        Cheers - L~R