Your Favorite Heroic Perl Story

by friedo (Prior)
on Jan 21, 2005 at 19:37 UTC

Once upon a time, about three years ago, I was writing a script to generate a large set of flat data files from a set of databases. The script was pretty straightforward: Fetch some data, munge it, write out the files into a series of directories. The directory tree would have been three levels deep with 16 nodes at each level and 16 files in each directory at the bottom level. In other words, about 65,536 files. I ran my program, but it had a crucial bug: It was not changing back to the root directory of the tree on each iteration! By the time I realized what had gone wrong and stopped the program, I had a directory structure several thousand levels deep.

Naturally, I first attempted to rm -rf it. That didn't work; rm complained that the structure was too deep and couldn't be traversed recursively. (I have since forgotten the exact error message and I didn't particularly feel like re-creating the situation.)

I tried piping find to xargs, but even that wouldn't work because rm would not delete a non-empty directory and find would fail before reaching the bottom. I tried any number of other solutions suggested by people on IRC and random friends over IM, without any luck.

I called over several of the BSD and shell experts at my office, sheepishly admitted what I had done, and asked for their advice. One said the filesystem was hosed and I would have to reinstall the system. (Not an ideal solution, as this was a large server used by many.) Another tried a few shell tricks but also couldn't fix it.

I finally decided that if Perl could create this mess, Perl must be able to fix it. I wrote a simple Perl version of rm -rf -- traverse the directories recursively and unlink the files at the end of the tree. I ran it and, lo and behold, it worked. After just a couple minutes, my bottomless directory tree of hell was gone. Perl didn't complain at all. Perl didn't even flinch.
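The whole fix boils down to something like the sketch below (a reconstruction of the idea, not the actual script): chdir into each directory so the names handed to the filesystem stay short, unlink the files, and rmdir on the way back out.

    use strict;
    use warnings;
    no warnings 'recursion';    # the tree may be thousands of levels deep

    # Descend with chdir so each syscall only ever sees short relative
    # names, delete the files at the bottom, and remove each directory
    # on the way back up.
    sub nuke {
        my ($dir) = @_;
        chdir $dir or die "Can't chdir to $dir: $!";

        opendir my $dh, '.' or die "Can't read $dir: $!";
        my @entries = grep { $_ ne '.' && $_ ne '..' } readdir $dh;
        closedir $dh;

        for my $entry (@entries) {
            if (-d $entry && !-l $entry) {
                nuke($entry);                       # recurse into subdirectory
            }
            else {
                unlink $entry or warn "Can't unlink $entry: $!";
            }
        }

        chdir '..' or die "Can't chdir out of $dir: $!";
        rmdir $dir or warn "Can't rmdir $dir: $!";
    }

    my $root = shift @ARGV or die "usage: $0 directory\n";
    nuke($root);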

Perl is cool.

Replies are listed 'Best First'.
Re: Your Favorite Heroic Perl Story
by redlemon (Hermit) on Jan 21, 2005 at 22:05 UTC

    My company was running a horrible GUI-based helpdesk ticketing system. On Windows. Being a UNIX sysadmin working on a Sun workstation, I hated that. It meant I had to run Windows alongside on a laptop and couldn't cut and paste solutions, etc. (this was before VNC).

    I decided to see what I could learn from tcpdumping its traffic. It turned out it got its GUI layout from a database, and that all the business rules were in that database as well. The account name and password were either unencrypted or easily guessed, so I logged in using tora and analyzed the schema. Before long I had a Perl shell running that I could use to handle my tickets. All was well with the world.

    Building in macros came naturally and was the first time saver. Single word commands that used a template to fill in the ticket response form and flagged the ticket as closed.

    All tickets were logged to a single queue, "helpdesk", and until then everybody just went through the queue and picked out their own. Some PHB decided it would be better to have a dedicated queue slave whose task it would be to monitor the queue and assign tickets to engineers. Of course there was no headcount available, so it was decided we would all take turns. That was very time-consuming and mind-fryingly dull.

    I thought of using regex rules to read and score the question forms and then using those scores to assign the tickets. So I built in a teaching mode where the shell would look at my choices and ask what keywords I had looked for in the problem. Within a couple of weeks it had developed its rule-based brain and was automatically distributing about 30% of all tickets during my shift.
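    The scoring idea looks roughly like this (a sketch, not the real isis code; the queue names, weights, and ticket text are invented):

        use strict;
        use warnings;

        # Each queue gets a list of [ regex, weight ] rules, built up
        # over time from the teaching mode described above.
        my %rules = (
            unix_team    => [ [ qr/\bsolaris\b/i, 3 ], [ qr/\bnfs\b/i,     2 ] ],
            windows_team => [ [ qr/\boutlook\b/i, 3 ], [ qr/\bprinter\b/i, 1 ] ],
        );

        sub route_ticket {
            my ($text) = @_;
            my ($best, $best_score) = (undef, 0);
            for my $queue (keys %rules) {
                my $score = 0;
                for my $rule (@{ $rules{$queue} }) {
                    my ($re, $weight) = @$rule;
                    $score += $weight if $text =~ $re;
                }
                ($best, $best_score) = ($queue, $score) if $score > $best_score;
            }
            return $best;    # undef means "leave it for a human"
        }

        print route_ticket('Outlook crashes whenever I print') || 'manual', "\n";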

    I gloatingly showed it off to my poor colleagues who were still manually distributing tickets. They went: "hey, can I use that?". So before long the shell was running 24x7 as a virtual queue monitor. That was the second time saver.

    Still later I adapted the same rule-based engine to answer trivial questions, of which it turned out there were quite a few, especially for the Windows helpdesk.

    I still remember isis (as the shell was called, after the ticketing system) fondly, especially since the ticketing system has been replaced with an even more horrible Java-based web interface that I just can't hack into.

Re: Your Favorite Heroic Perl Story
by holli (Abbot) on Jan 22, 2005 at 07:42 UTC
    The setting:
    I had not been at my new job for long and was writing some little tools for my own convenience, when my boss looked over my shoulder.

    >> What is this?
    << That is Perl. It is a language...
    >> I know what it is. We don't use that, it's whacky.
    << But,
    >> No but. We don't use it!
    << Ah, ok. .oO(a*****e)Oo.

    Two or three weeks later, I looked over the boss's shoulder while he tried to write a filter for a text file (in VB), with heavy use of index(), substring(), and ugly long if/else constructions. It ended up as 70 lines of code and three hours of work.
    I let him do that, and in the meantime I prepared a one-liner in the
    perl -n -e "print if //" infile > outfile
    style (one line, 2 minutes).

    When I showed it to him he was first angry, then amazed. Since that day we have used Perl a lot, and I am the company's "chief of Perl".

    holli, regexed monk

      I got into Perl by writing a tool that would parse and extend VB classes with DB interaction (automatically generating a DB schema to make the classes persistent, as well as adding persistence methods to the code). I have never really done any significant VB since that project; it's been pretty much all Perl since then. :-)

      ---
      demerphq

      So, all you did was to reinvent grep? Because that's what a perl -ne 'print if //' is.
        Since this seems to be a Windows shop, they might not have grep. Plus, he might have preferred to use Perl for some reason. No need to be so judgmental.
Re: Your Favorite Heroic Perl Story
by halley (Prior) on Jan 21, 2005 at 19:52 UTC
    Key lessons learned?
    • don't write code on a production server?
    • don't run code for the first time on a production server?
    • don't do anything unfamiliar on a production server?
    • always mount a scratch monkey
    • grab the source code to rm and find out why it gave an error message instead of doing the same thing your script could do

    --
    [ e d @ h a l l e y . c c ]

      Well, rm should just work and shouldn't have to be debugged. On my system, for example, rm -rf * didn't complain.
      It was not a production server, it was a development server used by many developers.

        I will not cut you any slack for that.

        Any server used by many users is, in fact, a production server. If you were just going to mess up your own area on the development server, you wouldn't have felt it was important to mention the other users.

        Any outage will inconvenience more than one service customer. In your case, the customers are developers, so they both understand that sometimes things get broken, and that you should have known better.

        Your co-workers should not give you any slack for that, either.

        --
        [ e d @ h a l l e y . c c ]

Re: Your Favorite Heroic Perl Story
by hardburn (Abbot) on Jan 22, 2005 at 16:25 UTC

    A while back, we were starting a project to redesign our websites. Before we could do that, though, we had to map out our current site to know what was there. Our site had been built up over the years into a Big Ball of Mud, so this was going to be a large project in itself.

    I thought about this a little while and was intrigued with the possibility that Perl might be able to map the whole thing out for us. I spent my own time on the weekend coming up with a spider that would go across our domains, grabbing links, and dump the final data in YAML (I released this code to the public).
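    The core of such a link-collecting spider is small; the sketch below is only an illustration (invented start URL, simplified output format), not the code that was released:

        use strict;
        use warnings;
        use LWP::UserAgent;
        use HTML::LinkExtor;
        use URI;
        use YAML qw(DumpFile);

        my $start = 'http://www.example.com/';          # invented
        my $host  = URI->new($start)->host;
        my $ua    = LWP::UserAgent->new(timeout => 20);

        my (%seen, %pages);
        my @queue = ($start);

        while (my $url = shift @queue) {
            next if $seen{$url}++;
            my $res = $ua->get($url);
            $pages{$url}{status} = $res->code;
            next unless $res->is_success && $res->content_type eq 'text/html';

            # with a base URL, HTML::LinkExtor returns absolute links
            my $extor = HTML::LinkExtor->new(undef, $url);
            $extor->parse($res->content);
            for my $link ($extor->links) {
                my ($tag, %attr) = @$link;
                next unless $tag eq 'a' && $attr{href};
                my $u = URI->new($attr{href});
                next unless $u->scheme && $u->scheme =~ /^https?$/;
                push @{ $pages{$url}{links} }, $u->as_string;
                push @queue, $u->as_string if $u->host eq $host;
            }
        }

        DumpFile('sitemap.yml', \%pages);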

    The next step was to write programs that would take that YAML file and do interesting things with it. The first was a broken-link check. It took about half an hour to write a program that went through the YAML and collected every request that had returned a 404. I sent the resulting list (about 7 pages long, IIRC) to our webmaster, who had our web site 404-free by the end of the week.

    One of the objectives for our redesign was to make no page more than 3 links deep from the home page. So my next program was to get the shortest path from page A to page B. This is essentially a problem of applied graph theory. Much of the available information on this topic is written for pathfinding in games, but the algorithms used there are applicable to general graphs (such as Dijkstra's algorithm and A*).
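    For an unweighted link graph like this one, a plain breadth-first search already gives shortest paths; Dijkstra and A* only start to matter once links have different costs. A minimal sketch (the adjacency list below is a made-up stand-in for the YAML data):

        use strict;
        use warnings;

        # Made-up stand-in for the link data pulled out of the YAML file:
        # page => [ pages it links to ]
        my %links = (
            '/'                 => [ '/products', '/about' ],
            '/products'         => [ '/', '/products/widgets' ],
            '/about'            => [ '/' ],
            '/products/widgets' => [ '/products' ],
        );

        sub shortest_path {
            my ($from, $to) = @_;
            my %prev  = ($from => undef);   # page => page we reached it from
            my @queue = ($from);
            while (my $page = shift @queue) {
                if ($page eq $to) {
                    my @path;
                    while (defined $page) {
                        unshift @path, $page;
                        $page = $prev{$page};
                    }
                    return @path;
                }
                for my $next (@{ $links{$page} || [] }) {
                    next if exists $prev{$next};
                    $prev{$next} = $page;
                    push @queue, $next;
                }
            }
            return;    # no route from $from to $to
        }

        print join(' -> ', shortest_path('/', '/products/widgets')), "\n";
        # prints: / -> /products -> /products/widgets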

    Although I've found many different uses for this, I've yet to use it for its original purpose of mapping our web site (we ended up making a simple map manually and going off that). I was hoping to take each page and make it into an SVG image with lines connecting linked pages. Exact layout could be done by an SVG graphics program. Alas, I never had the time to do this.

    "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

      Look into GraphViz at some point; most of the work you'd have to do is already done.

      Makeshifts last the longest.

Re: Your Favorite Heroic Perl Story
by jimbojones (Friar) on Jan 22, 2005 at 23:56 UTC
    This may not be 'heroics', more of a Perl success story.

    I did my post-graduate work at a large particle accelerator laboratory outside of Chicago. We were looking for a very small asymmetry between the number of a certain decay mode of a certain type of particle and that of its anti-particle. These decay events are very rare, and we needed a few million of them to see the asymmetry. The background number of decays was on the order of trillions of events.

    To sort through the decays, we built a three-level "trigger" to see the events in the detector. The first two levels, which cut out 99% of the decays, were hardware-based. The "Level 3 trigger" did software event reconstruction and pattern matching on the decays, and tagged the event as a possible decay of interest. Those decay events were written to tape.

    Now, not all the events written to tape were candidates for the asymmetry measurement. Some were for other physics modes, some were for calibration of the detector, etc. However, they were all written to tape in the order they occurred and were interspersed on the raw data tapes. To do the data analysis, as different groups wanted to study different samples, we had to do an "offline split" of the tapes. The split was based on the information that the Level 3 trigger wrote into the event data header. The process:

    • Read 10 or so raw data tapes. Add data to disk files based on the event tag.
    • Once one tape's worth of data was on disk (for the various samples), write data to a new tape, specific to that sample.
    • Repeat for all 3000 20 GB input tapes.
    The job took about 4 months of baby-sitting.

    This process finished by 1997. Then the lab received more funding, and it was decided that we would run the experiment again in 1999 for more data. I was deep into my thesis by then, but my advisor asked if I could look at streamlining the "split" process to do it online, in real time. With two post-doctoral fellows, we wrote a Perl-based caching scheme to do the split on the fly. Now the Level 3 software wrote the event data to a disk array. We had a Perl daemon that monitored the disk as the data files were being written. Once it knew we had a full tape's worth of data, it spawned a child process that asked the scientists on shift to mount a tape and click a few buttons, and that data was sent to a tape based on its event type. One post-doc wrote the daemon, I wrote the tape-writing job, and the third guy handled some of the UI components. It took us about 3 weeks, mostly because we didn't know Perl at the time.
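    In outline, the daemon half looked something like the sketch below (paths, sizes, and the tape-writing command are invented placeholders, not the experiment's actual code):

        use strict;
        use warnings;

        my $spool      = '/data/level3';   # where Level 3 drops the event files (invented path)
        my $tape_bytes = 20 * 1024**3;     # roughly one tape's worth of data

        while (1) {
            opendir my $dh, $spool or die "Can't read $spool: $!";
            my @files = grep { -f "$spool/$_" } readdir $dh;
            closedir $dh;

            my $total = 0;
            $total += -s "$spool/$_" for @files;

            if ($total >= $tape_bytes) {
                my $pid = fork;
                die "fork failed: $!" unless defined $pid;
                if ($pid == 0) {
                    # child: prompt the shift crew for a tape and stream the
                    # files out to it ('write_to_tape.pl' is a placeholder)
                    exec 'write_to_tape.pl', map { "$spool/$_" } @files;
                    die "exec failed: $!";
                }
                waitpid $pid, 0;
            }
            sleep 60;    # poll once a minute
        }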

    Perl fully saved the day, with its easy filesystem access and text handling (used to parse which data files were on disk). All told, we saved about 3000 20 GB data tapes. As I recall, they went for $20 a pop. Given what they pay grad students and post-docs, that was a ten-fold return on investment. It also saved the next set of grad students a 4-month wait for an "offline" split.

      Hey Jimbo! Were you on NuTeV? I was on DONUT, out PWest way. Had to do the same thing with the FNAL tape libraries. Was one of my first big Perl projects, written in Perl 4.
        Hi,

        Nope, KTeV. Wrote it in Perl 5.

        Nice to see other physicists, or ex-physicist, out here.

        Seems like a long time ago ...

        - jim
Re: Your Favorite Heroic Perl Story
by martinvi (Monk) on Jan 24, 2005 at 07:58 UTC

    Not really heroic:

    Some years ago, management decided to introduce business transaction monitoring. After finishing a call, the phone agent got a form full of checkboxes, radio buttons, etc. to fill in recording what the last call was about.

    That form took up to 16 seconds to process. Not good.

    A lot of meetings were held, a lot of buzzwords spilled, a lot of money spent. Since I'm the admin, I had to prove that my servers, operating systems, etc. were not the bottleneck. Bored by the uproar caused by the non-techs, I rewrote the form handler using mod_perl instead of the "highly optimized, enterprise-level, patented technology". Processing time dropped to 8 ms (a factor of 2000).

Re: Your Favorite Heroic Perl Story
by wolfger (Deacon) on Jan 21, 2005 at 19:51 UTC

    Very cool story. I'd ++ if I had any left... One question, though. rm -rf? On my system it has to be rm -Rf

    My own story is far less dramatic. I simply have problems with my video players. MPlayer and Ogle (if I start them from the menu rather than from a terminal) never die cleanly. Sometimes, by the end of the day, I have 20+ stale processes left over from one or the other. I whipped up a quick one-liner to seek and destroy them all.
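    Such a seek-and-destroy one-liner might look roughly like this (process names from the post above; the ps options are a guess about the system):

        perl -e 'kill "TERM", map { /(\d+)/ } grep { /mplayer|ogle/i } `ps -eo pid,comm`'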

    I'm looking forward to hearing other people's stories.


    --
    Linux, sci-fi, and Nat Torkington, all at Penguicon 3.0
    perl -e 'print(map(chr,(0x4a,0x41,0x50,0x48,0xa)))'

      man pkill?

      Makeshifts last the longest.

        Oh, sure... Just make my perl skills obsolete. ;-)

        Thanks for the tip. I never heard of pkill before.


        --
        Linux, sci-fi, and Nat Torkington, all at Penguicon 3.0
        perl -e 'print(map(chr,(0x4a,0x41,0x50,0x48,0xa)))'
      # uname
      SunOS
      # rm
      usage: rm [-fiRr] file ...
      cheers!
Re: Your Favorite Heroic Perl Story
by tcf03 (Deacon) on Jan 21, 2005 at 20:43 UTC
    Mine is not as dramatic either. A company I used to work for used a popular piece of patch management software that had poor reporting facilities. I wrote some Perl scripts to query the backend database, which in the end saved a lot of time. For what it's worth, my boss was impressed with Perl. This opened the door to my using Perl in more production environments at a company that, for the most part, shuns open source technology.
Re: Your Favorite Heroic Perl Story
by PhilHibbs (Hermit) on Jan 24, 2005 at 17:38 UTC
    On my last project, we were using a product that generated code consisting of between five and fifty files, FTP'd them to a mainframe, and then (theoretically) compiled and ran them. The FTP connection was as flaky as the east coast, and kept disconnecting at random points. With larger jobs, you could literally sit there clicking Send for an hour without ever getting all the files to transfer.

    I wrote a script that would decipher the control file that determines what files get sent where, open the FTP connection, and send the files. If any error (other than password rejects) occurred, it would disconnect, reconnect, and carry on from the file that had failed. No more "fail on the fifth file, retry, fail on the second file", etc.
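    The heart of such a retry loop is small; here is a rough sketch using Net::FTP (host, credentials, and the file list are placeholders, and the real script also parsed the product's control file):

        use strict;
        use warnings;
        use Net::FTP;

        # Placeholder list; the real script built this from the control file.
        # Each entry: [ local file, remote dataset member ]
        my @transfers = (
            [ 'PROG1.cbl', "'PROD.SOURCE(PROG1)'" ],
            [ 'PROG2.cbl', "'PROD.SOURCE(PROG2)'" ],
        );

        my $i = 0;
        while ($i < @transfers) {
            my $ftp = Net::FTP->new('mainframe.example.com', Timeout => 60);
            unless ($ftp) {
                warn "connect failed: $@";
                sleep 5;
                next;
            }
            $ftp->login('user', 'pass')
                or die 'login rejected: ', $ftp->message;   # never retry a bad password

            # carry on from the file that failed on the previous connection
            while ($i < @transfers) {
                my ($local, $remote) = @{ $transfers[$i] };
                if ($ftp->put($local, $remote)) {
                    $i++;                                   # this one made it
                }
                else {
                    warn "transfer of $local failed: ", $ftp->message;
                    last;                                   # reconnect and retry it
                }
            }
            $ftp->quit;
        }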

    The script, the first version of which was less than 50 lines long and took under two hours to write, saved literally thousands of hours of developer time, the loss of which would have killed the project. It also significantly improved Perl's reputation in the team. (Perhaps more honestly, if I hadn't written this script, something else would probably have been done about the FTP connection, but we would have suffered for a couple more months at least.)

    Later on in the project, I upgraded the script to send from different environments, to send multiple sets of source code, generate composite compilation jobs, scan for lines longer than 72 characters, pull out all the embedded SQL statements into files for DBA optimisation, and a few other time-savers.

    Historical note: at the time this comment was posted, the east coast of the US was in the middle of a rather severe blizzard, hence the "flaky" quip.

Re: Your Favorite Heroic Perl Story
by cosimo (Hermit) on Jan 25, 2005 at 11:56 UTC
    Once upon a time, about four or five years ago, I was given a corrupted MS Excel file by a desperate girl who had just lost months of work.

    I tried everything to resurrect that file, even opening it with newer or older Excel versions, changing random bytes in it (!), and replacing parts of it with parts from a good xls file. None of it worked.

    By that time, being a good guy :-), I already knew that Spreadsheet::ParseExcel and Spreadsheet::WriteExcel were on CPAN. Perhaps I could use those modules to parse the corrupted xls and write it back out to a new file.

    That is what I did, and with a five-line Perl script I saved that person's work.
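    The rescue amounts to reading whatever Spreadsheet::ParseExcel can still make sense of and writing it straight back out with Spreadsheet::WriteExcel, along these lines (file names invented; not the original five-liner):

        use strict;
        use warnings;
        use Spreadsheet::ParseExcel;
        use Spreadsheet::WriteExcel;

        my $book = Spreadsheet::ParseExcel::Workbook->Parse('corrupted.xls');
        my $out  = Spreadsheet::WriteExcel->new('rescued.xls');

        for my $sheet (@{ $book->{Worksheet} }) {
            my $new = $out->add_worksheet($sheet->{Name});
            next unless defined $sheet->{MaxRow};
            for my $row ($sheet->{MinRow} .. $sheet->{MaxRow}) {
                for my $col ($sheet->{MinCol} .. $sheet->{MaxCol}) {
                    my $cell = $sheet->{Cells}[$row][$col] or next;
                    $new->write($row, $col, $cell->Value);
                }
            }
        }
        $out->close;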

Re: Your Favorite Heroic Perl Story
by legato (Monk) on Jan 25, 2005 at 15:57 UTC

    I work in a primarily C# and VB shop. Another group needed a tool to alter log files, using a server name buried inside each line to query a DB for the user responsible for that server and appending that user to the line.

    The devel team lead rejected the project on the basis that the 16 man-hours required to create and test the app in C# were not cost-justifiable. I suggested I be allowed to spend an hour seeing if it could be done more cheaply, and it was approved. I spent 15 minutes writing a script using DBI, DBD::ADO, and a regex. It took me 5 minutes to test completely. I then played FFT-A for the remaining 40 minutes. ;-)
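    Stripped to its essentials, the script is just a loop, a regex, and a prepared query, along these lines (log format, column names, and connection string are invented; the real one used DBD::ADO against the shop's own database):

        use strict;
        use warnings;
        use DBI;

        # Connection string and schema are invented for illustration.
        my $dbh = DBI->connect(
            'dbi:ADO:Provider=SQLOLEDB;Data Source=dbhost;Initial Catalog=assets;Integrated Security=SSPI',
            '', '', { RaiseError => 1 },
        );
        my $sth = $dbh->prepare('SELECT owner FROM servers WHERE name = ?');

        while (my $line = <>) {
            chomp $line;
            # e.g. "2005-01-25 10:12:03 host=WEBSRV04 event=restart"  (invented format)
            if ($line =~ /\bhost=(\S+)/) {
                my ($owner) = $dbh->selectrow_array($sth, undef, $1);
                $line .= "\towner=" . (defined $owner ? $owner : 'unknown');
            }
            print "$line\n";
        }
        $dbh->disconnect;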

    Anima Legato
    .oO all things connect through the motion of the mind

Re: Your Favorite Heroic Perl Story
by jynx (Priest) on Jan 25, 2005 at 23:30 UTC

    I was working at a library, and we had this wacky book-accounting tool that was proprietary software from the company that maintained our central computer system for us. The person who handled the software wanted an easier interface and a way to catch mistakes before sending the accounting off to the billing department, but the company was unwilling to provide that for less than quite a few grand.

    That is, the handler wanted a glue tool that munged strings of text into a practical report. I had never used Perl, but I knew that's what it did best, and I grabbed at the opportunity to learn a new language.

    So for a week I spent my spare time at work hacking out a Perl script that did those things. People blamed me, but it was really Perl that saved them thousands of dollars...

    jynx
