http://www.perlmonks.org?node_id=465344


in reply to Imploding URLs

You could use String::Ediff to find common substrings between pairs of URLs, then break the matches down into pieces of 31 characters or fewer and count them in a hash.
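A rough sketch of that idea is below. The layout of the indices returned by ediff() (eight numbers per common block, the first two being start/end offsets into the first string) is taken from the module's synopsis, and the 31-character limit and the helper sub are just illustrative assumptions, so check the docs for your version before relying on it:

<code>
use strict;
use warnings;
use String::Ediff;

# Tally <=31-character chunks of the substrings common to two URLs.
sub count_common_chunks {
    my ($url1, $url2, $counts) = @_;

    # ediff() returns a flat, space-separated list of indices;
    # assumed here: 8 numbers per common block, the first two being
    # the (exclusive-end) offsets of the block in $url1.
    my @idx = split ' ', String::Ediff::ediff($url1, $url2);
    while (my @hunk = splice @idx, 0, 8) {
        my ($start, $end) = @hunk[0, 1];
        my $common = substr $url1, $start, $end - $start;

        # Break the common substring into pieces of at most 31 chars
        # and count each piece in the shared hash.
        $counts->{$_}++ for ($common =~ /(.{1,31})/gs);
    }
}

my %count;
count_common_chunks(
    'http://example.com/foo/bar',
    'http://example.com/foo/baz',
    \%count,
);

print "$count{$_}\t$_\n" for sort { $count{$b} <=> $count{$a} } keys %count;
</code>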

It uses a suffix tree to find the substrings, so it should be fairly efficient. Out of the box it only finds substrings of length >= 4, but that could probably be changed; substrings of length one wouldn't be worth compressing anyway.

Update: You might prefer Algorithm::Diff, which has a nicer interface and more options.
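For example, its OO interface lets you walk the diff hunks and pull out just the matching runs; the character-by-character split and the sample URLs here are just one way to set it up:

<code>
use strict;
use warnings;
use Algorithm::Diff;

# Treat each URL as a sequence of characters; hunks where Same() is
# true are the runs shared by both sequences.
my @a = split //, 'http://example.com/foo/bar';
my @b = split //, 'http://example.com/foo/baz';

my $diff = Algorithm::Diff->new(\@a, \@b);
while ($diff->Next()) {
    next unless $diff->Same();
    my $run = join '', $diff->Items(1);   # matching characters from the first sequence
    print "common run: $run\n";
}
</code>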

Update 2: The node Re: finding longest common substring also builds a suffix tree, and it might be adaptable to your problem.