The code itself isn't extraordinary, but the task was quite large: a company I used to work for had a site written back in 1995, in good ol' HTML 1.0. All the navigational headers, footers, and links were hard-coded into well over 1,000 pages, all of which, unfortunately, had been hand-produced at different times by different people. The pages all looked the same in the browser, but the HTML behind the scenes was veeeery different at times.

Enter Perl, center stage: after capturing a recursive directory search of all the HTML and SHTML content, I wrote a script to open each listed file, scan for a variety of conditions within each (usually ill-formatted) page, and make the necessary adjustments to a standardized format that now uses SSI, which greatly reduces the maintenance headache. Less than 200 lines of code and under a minute of run time. It missed a few of the pages that were *really* bad on the inside, but it also reported those to me, and I could take care of them by hand. Major time saver.
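To give a flavor of the approach, here's a minimal sketch of that kind of normalization pass. It is not the original script: the comment markers, the `/includes/*.shtml` paths, and the exact patterns are placeholders I've assumed for illustration, and the real version scanned for many more conditions. The general shape is the same, though: walk the tree, try to swap the hand-coded header and footer blocks for SSI includes, and report any page too mangled to handle automatically.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Collect every .html/.shtml file under the document root.
my $docroot = shift @ARGV or die "Usage: $0 /path/to/docroot\n";
my @pages;
find( sub { push @pages, $File::Find::name if /\.s?html$/ }, $docroot );

my @too_mangled;    # pages the script couldn't normalize

for my $file (@pages) {
    open my $in, '<', $file or do { push @too_mangled, "$file (open failed)"; next };
    local $/;       # slurp the whole page at once
    my $html = <$in>;
    close $in;

    # Replace the hand-coded header block (from <body> down to an
    # end-of-header comment) with a single SSI include. The marker and
    # the include path are assumptions, not the company's real markup.
    my $ok = $html =~ s{<body[^>]*>.*?<!--\s*end\s+header\s*-->}
                       {<body>\n<!--#include virtual="/includes/header.shtml" -->}is;

    # Same idea for the footer block.
    $ok &&= $html =~ s{<!--\s*begin\s+footer\s*-->.*?</body>}
                      {<!--#include virtual="/includes/footer.shtml" -->\n</body>}is;

    # If either pattern failed, the page is too irregular to fix blindly,
    # so report it for manual cleanup instead of guessing.
    unless ($ok) {
        push @too_mangled, $file;
        next;
    }

    open my $out, '>', $file or do { push @too_mangled, "$file (write failed)"; next };
    print {$out} $html;
    close $out;
}

print "Could not normalize:\n", map { "  $_\n" } @too_mangled if @too_mangled;
```

Run against a copy of the document root, it rewrites what it can in place and prints the stragglers, which is essentially the "fix most, report the rest" workflow described above.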