http://www.perlmonks.org?node_id=204833

Sometimes people who have a problem to solve can do amazing things when they aren't encumbered by knowing the "right way" to go about things.

I was looking over such a piece of code last night. Someone needed a web interface for running tests cases, and put together a web server that would serve up HTML, images, and run Java test cases and return the results (in HTML). All in 61 lines of Perl. Granted, it was hardly a complete web server. It worked off of the first line of the HTTP request, and returned a minimal (but correct) response header. Some of it looked like raw socket code lifted out of the Camel book. No strict, no modules (other than Socket), and only one comment. Still, it solved someone's real problem in less than a page of code.

Looking it over, I got to thinking about some of the people who wander in here with real problems to solve. We usually advise them to use strict, use CGI, and reuse some set of modules from CPAN. Instead of a minimal script that'll solve their problem, they're sent off with a shopping list of things needed to build a "correct" (i.e., heavier) solution. And often these solutions require fresh CPAN downloads to install elsewhere, which raises a real barrier in some situations.

The problem with the "right way" is that it's a slippery slope. Do I really need an XML parser if I'm given a simple chunk of XML to deal with, or are regexes sufficient? Do I really need a database for a simple content management system, or would flat files work adequately? Do I really need to use CGI if I'm doing something simple inside the firewall? The "right way" adds weight that isn't always needed.

But many of us are inclined to do things right. We know that today's requirements are incomplete and will probably grow anyway, and that we'd better plan for the future by building solider solutions than are asked for, leveraging tested, off-the-shelf code where possible to save effort.

I suspect, though, that this leads many of us to over-engineer the occasional one-time problem we encounter.

(jeffa) Re: Simplicity vs. Doing It Right
by jeffa (Bishop) on Oct 13, 2002 at 01:35 UTC
    You raise some very good points, dws - i personally still prefer to do it "right" at the expense of building a heavier solution and over-engineering the problem. Why? Because, for me, 9 out of 10 times it is easier and faster to do so. When i first joined the Monastery, i was hesitant to search CPAN and spend time reading a module's documentation. But, after practice and experience, using modules became a lot easier, and even fun. I think they help one to focus on the mortar and not the bricks, so to speak: instead of re-inventing a wheel, just concentrate on how to use the wheel (i feel like i am preaching to the choir now ;)). This way, one can focus on the overall design - they sure helped me in this area.

    UPDATE:
    Had to run off to dinner - while i was gone i remembered an incident where i chose the 'right now' way instead of the right way. Recently i needed to parse some HTML that i wrote; instead of using a robust CPAN module, i quickly threw together:

    my @mp3 = $html =~ /([-.\w]+\.mp3)/g;
    Pretty fragile code, but i have to admit that i was able to finish my task very quickly. Of course, since i am the one who wrote the HTML that i am parsing, i can get away with it. Trying to find that wonderful balance between 'being smart' and 'getting stuff done' is indeed an art. :)
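    For contrast, a sketch of what the "robust CPAN module" route passed over here might look like, using HTML::LinkExtor (from the HTML::Parser distribution). This is a hypothetical illustration, not code from the thread; the function name and sample markup in the test are invented.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::LinkExtor;   # from the HTML::Parser distribution

# Collect href attributes that point at .mp3 files, using a real
# HTML parser instead of a regex over the raw markup. Survives
# things the regex does not, like mp3 names mentioned in plain text.
sub mp3_links {
    my ($html) = @_;
    my @mp3;
    my $p = HTML::LinkExtor->new(sub {
        my ($tag, %attr) = @_;
        push @mp3, $attr{href}
            if $tag eq 'a'
               and defined $attr{href}
               and $attr{href} =~ /\.mp3$/;
    });
    $p->parse($html);
    $p->eof;
    return @mp3;
}
```

    More to type, and a distribution to install, which is exactly the trade-off under discussion.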

    jeffa

    L-LL-L--L-LL-L--L-LL-L--
    -R--R-RR-R--R-RR-R--R-RR
    B--B--B--B--B--B--B--B--
    H---H---H---H---H---H---
    (the triplet paradiddle with high-hat)
    
Re: Simplicity vs. Doing It Right
by jryan (Vicar) on Oct 13, 2002 at 00:39 UTC
    The problem with the "right way" is that it's a slippery slope. Do I really need an XML parser if I'm given a simple chunk of XML to deal with, or are regexes sufficient?

    Personally, I'd prefer a pre-built XML parser to rolling my own, especially after the "simple chunk of XML" becomes more complex.

    Do I really need a database for a simple content management system, or would flat files work adequately?

    Well, it depends on the definition of simple. If you're doing something for your own personal site, sure, why not. I didn't even use a database for mine; I used a "nested-template" based solution that I whipped up myself.

    Do I really need to use CGI if I'm doing something simple inside the firewall? The "right way" adds weight that isn't always needed.

    Yes. Why do work that you don't need to? It takes me about 12 and a half seconds to type:

     use CGI;
     my $q = CGI->new;
     my $param = $q->param("param");

    And the best part is, I don't even need to think about it. And I know it won't break if I ever have to use it outside the firewall. Why would you roll your own version of this well-rounded wheel, especially if you know better?

    I guess what I'm really trying to say is that it really depends on the situation. Sure, your friend's mini web server worked great, and there's certainly nothing wrong with a piece of code that does exactly what it is supposed to do to solve the problem at hand, even if it is incomplete in the broad sense.

    However, when people come here for help, it is a different situation. I most certainly expect that they should receive the most complete, accurate information on the "right way" to solve the problem. Why should they get anything less?

      Why do work that you don't need to? It takes me about 12 and a half seconds to type: use CGI; ...

      Sure. After you've used CGI a few times it only takes a few seconds. But I'll wager the first time took a lot longer. It took me hours, because I wanted to understand what was going on under the covers. Ditto for the XML:: stuff. The first time can be a killer. If the only advice we give people is to use CGI, XML::*, etc., then we might be short-changing them. It might be the "right" way, but is it the only way? Heck no. Applying a regex to $ENV{'QUERY_STRING'} works just fine for some one-off problems. Do I do that anymore? No. But in some cases, it's O.K. to send people away with that as an option.
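      For the record, the regex-on-$ENV{'QUERY_STRING'} approach mentioned above can be sketched in a few lines. This is a hypothetical illustration, not dws's actual code; it ignores POST bodies and some URL-escaping corner cases that CGI.pm handles for you, which is part of why it is only a one-off tool.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A rough sketch of parsing CGI parameters straight out of the
# query string: split on '&' or ';', split each pair on '=', then
# undo the URL encoding ('+' for space, %XX escapes).
sub parse_query_string {
    my ($qs) = @_;
    my %param;
    for my $pair (split /[&;]/, $qs) {
        my ($k, $v) = split /=/, $pair, 2;
        next unless defined $k and length $k;
        for ($k, $v) {
            next unless defined;
            tr/+/ /;
            s/%([0-9A-Fa-f]{2})/chr hex $1/ge;
        }
        $param{$k} = defined $v ? $v : '';
    }
    return %param;
}

my %param = parse_query_string($ENV{'QUERY_STRING'} || '');
```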

        Sure, it can take hours the first time you use a module if you go it alone, especially if you read what is happening under the hood first (I often do this too). However, this isn't the case when you read: "You need to use module X, just like this: *short code snippet follows*." Why send people off with a solution that you know will cause them more work than necessary, especially when the "right way" is so much easier to implement?
Re: Simplicity vs. Doing It Right
by jplindstrom (Monsignor) on Oct 13, 2002 at 09:45 UTC
    Using a module is not only about reusing code. It's also reusing knowledge about the problem domain.

    This is good, because if you don't know how to solve the problem in the first place, how do you know what you need to know to not do something stupid? Well, you don't. Sometimes that matters, sometimes it doesn't. But as time passes, things change, and it usually ends up being important because things never stay as simple as they seemed at first glance.

    Using a module is also reusing experience. A module probably (hopefully :) has time in production, which means someone else already encountered the first real-life, non-standard-compliant, oh-that's-how-we-really-do-things-here practical problems and solved them.

    I see this all the time with co-workers doing stuff in C or C++. Like writing your own mail parser (for a mail-to-db gateway). One program breaks constantly (ok, now, a year later it's a bit more stable after n fixes) because all of a sudden a weird mail format makes its way to the mailbox, or we switched mail server and the program doesn't know how to POP anymore because it made some unnecessary assumption about something.

    And all the time I think to myself "find a library", "I know a module that does that", and "don't implement that yourself, it's already been done!"

    The problem with other languages that we don't have with Perl isn't that there are no libraries available. It's that a) people can't find them, and b) people can't try them before buying them, so they don't try them.

    /J

    Update:Typo.

      Using a module is not only about reusing code. It's also reusing knowledge about the problem domain.

      I agree entirely, but let's approach this from the perspective of someone who needs to understand a thin slice of a problem domain in order to solve a problem, and is handed a handful of standard modules that cover a superset of their problem. If they take the code snippets as offered, they risk being on the slippery slope towards Cargo Cultism. If they take the time to grok the modules, they come out better in one way (knowledge and skill), but worse in another (the opportunity cost of the time taken to understand the modules).

      The example that started this off is an interesting one to think through. Someone needed to run unit tests through a browser. They decided to write a small web server. (Perhaps they'd seen another example where this approach had been successful. Perhaps they were in an environment that precluded their installing Apache or IIS. We don't know.)

      The slice of the problem that they needed to solve was

      • Use sockets to respond to HTTP requests
      • Ignore requests that didn't originate from localhost
      • Parse just enough of an HTTP request to figure out what to do
      • Return static HTML or images, or run a Java program that emits HTML, and return the results
      • Return a minimally correct content header with the response

      This they were able to cobble together in 61 lines. Not pretty, but workable.
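      As a flavor of how small the "parse just enough" step from that list can be, here is a hypothetical sketch (not the original 61 lines; the function names and path rules are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse just enough of an HTTP request: take the request line
# ("GET /path HTTP/1.0"), pull out the method and path, and
# drop any query string.
sub parse_request_line {
    my ($line) = @_;
    my ($method, $path) = $line =~ m{^(\w+)\s+(\S+)\s+HTTP/\d+\.\d+}
        or return;
    $path =~ s/\?.*//;
    return ($method, $path);
}

# Decide what kind of response a path calls for: static file,
# run a Java test case, or neither.
sub classify {
    my ($path) = @_;
    return 'static' if $path =~ /\.(?:html?|gif|jpe?g|png)$/i;
    return 'test'   if $path =~ m{^/run/};
    return 'unknown';
}
```

      Everything else is a socket accept loop and a couple of print statements, which is how the whole thing fits in a page.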

      If they'd come to me for help, I would have suggested starting with HTTP::Daemon. To use HTTP::Daemon righteously, you need to understand HTTP::Request, HTTP::Response, and HTTP::Status. Status is trivial, but to understand the first two, you need to add HTTP::Message to the mix. It's a common superclass, which adds additional complexity weight. HTTP::Message uses HTTP::Headers, which pulls in URI::URL, HTTP::Date and MIME::Base64. URI::URL subclasses URI::WithBase, which overrides stuff in UNIVERSAL (heads are spinning at this point.)

      And I'm leaving stuff out. By suggesting HTTP::Daemon, I've sent someone away with a pretty hefty transitive closure of stuff to read through and understand, if only to know whether or not it applies to their problem.
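      On the surface the HTTP::Daemon route is compact. A minimal sketch, following the module's documented accept/get_request/send_response interface (the handler, its paths, and the --serve guard are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Daemon;      # part of the libwww-perl distribution
use HTTP::Status;
use HTTP::Response;

# Decide what to send back for a given HTTP::Request object.
sub handle_request {
    my ($r) = @_;
    return HTTP::Response->new(RC_FORBIDDEN)
        unless $r->method eq 'GET';
    if ($r->uri->path =~ /\.html$/) {
        my $res = HTTP::Response->new(RC_OK);
        $res->content_type('text/html');
        $res->content('<html><body>test results here</body></html>');
        return $res;
    }
    return HTTP::Response->new(RC_NOT_FOUND);
}

# The accept loop, modeled on the HTTP::Daemon synopsis; guarded
# so the script only binds a port when run with --serve.
if (@ARGV and $ARGV[0] eq '--serve') {
    my $d = HTTP::Daemon->new(LocalPort => 8080)
        or die "can't bind: $!";
    while (my $c = $d->accept) {
        while (my $r = $c->get_request) {
            $c->send_response(handle_request($r));
        }
        $c->close;
        undef $c;
    }
}
```

      Few lines, yes. But understanding what those few lines rest on is exactly the transitive closure described above.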

      Are they better off? Maybe yes, maybe no.

        I think your initial list is an oversimplification. Either the coder in question takes the time and energy to read and to understand the relevant HTTP RFC, or he copies code from somewhere else. Which is more Cargo Cultish? Maybe it would be simpler (for some value of simple) to post this message by telnetting to port 80, but I'm using a web browser.

        Update: Fixed a punctuation bug.

Re: Simplicity vs. Doing It Right
by dvergin (Monsignor) on Oct 13, 2002 at 04:03 UTC
    Yup! I can think of several "dirty little secrets" in the project I am currently responsible for. Quick-hack data extraction from some simple XML generated in predictable ways only within the current suite comes quickly to mind.

    Which brings me to the matter of factoring these little nuggets. As often as I can, when I commit one of these "quick solutions," I wrap it in such a way that later, when things get more complicated, it will be easy to exchange the "adequate hack" for some serious code.

    The slippery slope of quick hacks embedded in code that has a future is when we later find ourselves writing around the inadequacies of the quick hack instead of fixing things. I try to set up subroutines that claim to do the job robustly even though a quick glance at the innards makes it obvious that their capabilities are limited. Later when the routine stumbles under new demands, I fix that subroutine rather than adapting the calling code to the limitations of the quick hack.

    My ability to predict which well-wrapped quick hacks will need this attention in the future is far from perfect. (Which is why Extreme Programming counsels us not to spend too much effort solving problems that have not yet come up.) I have said two years later "Yee Gads! Is that function still doing the job? It was just a placeholder." as often as I have said one month later, "I thought sure I had anticipated everything in this block!"

    And often when something breaks, there is the wonderful 'Aha!' moment... "So that's why it's safer to use the recommended module!" Those are the lessons that stick.

    I think quick hacks are fine if documented and factored properly. I would even argue (XP-ishly) that effective programming is a constant process of weighing wisely the least you can do to produce code that does the job now and lends itself well to future modification.

    ------------------------------------------------------------
    "Perl is a mess and that's good because the
    problem space is also a mess.
    " - Larry Wall

Re: Simplicity vs. Doing It Right
by LEFant (Scribe) on Oct 13, 2002 at 03:34 UTC
    One of the enduring problems of the software trade is determining what is the right way to solve the problem at hand. Often we are given too little guidance in what the client ultimately thinks is right. If you are solving your own problem I surely hope you know what is right.

    Sometimes we seem to have to coax from the client what right really means to him. Is it quick delivery of code, rigorous correctness, fast execution, parsimonious resource use, extensibility, adherence to organizational standards, reuse of existing components? We can influence what is right as well. Some of the earliest work studying the work of programming determined that programmers could successfully direct their solutions to concentrate on a desired rightness metric (Gerald M. Weinburg, "The Psychology of Computer Programming", 1971).

    As professionals, we are paid to do what is right. We are often given wide latitude in using our judgment to determine what is right in the context of the problem at hand and the social context of the organization. We can professionally differ with management's concept of rightness yet conform to it; we are accepting pay for doing so after all.

    It takes time to find modules and learn to apply them. It takes time to reinvent the wheel. Both approaches can build my skills as an individual. Most often the right thing to do is to spend my time using and expanding the tools available in my community.

    It is easy to waste time gilding lilies, but often both entertaining and enlightening. It can be fun to do things quick and dirty. The rub is in knowing what is "right" for the case at hand. I have been at it for decades now and still make wrong choices, hopefully a lot less frequently now.

    Bob

      The correct spelling of the surname of the author of The Psychology of Computer Programming is Weinberg. Thanks to dws, whose joy at doing powerful things succinctly with perl I certainly don't want to diminish. After all, that is precisely why I have adopted perl, belatedly, I admit.
Re: Simplicity vs. Doing It Right
by Revelation (Deacon) on Oct 13, 2002 at 04:12 UTC
    But many of us are inclined to do things right. We know that today's requirements are incomplete and will probably grow anyway, and that we'd better plan for the future by building solider solutions than are asked for, leveraging tested, off-the-shelf code where possible to save effort.

    "The right way" to me is based more on the situation I'm using my code in than on basic programming advice given to me. If a user says "I don't have access to module X", I'm not going to tell them 'use module X', or ask 'why can't you use it', but instead try to give them a possible solution. The advice of using modules is helpful if the person intends to use modules, but I find it more important to have a viable solution.

    The conventions of cpan, and of a dbm, are there to make perl easier to use. However, the zealous 'use module X' and 'don't roll your own' point out another use of modules: to impel users to refactor their code. This is only helpful when a user is programming with the intent to find the best solution! Most perl-monks write code built for programmers, instead of code built for basic distribution to the perl-illiterate masses.

    Why use mySQL, when you don't have access to it? Or when a simple flat file system will suffice? Why force your users to download tons of modules, when you can incorporate your own code? It's interesting to observe that many web-hosting companies now offer 'developer packages', specifically designed to allow for these conventions. (Hard linked perl libs, pseudo-root access, access to the apache binaries.)

    Sometimes it is necessary to face reality and understand that if you're making a widely distributed package, your users may not have the same access to modules, or the same programming abilities, as you. Not all web hosting companies allow mySQL access, and not everybody is smart enough to understand the conventions of use lib; or @INC. If you orient your code towards them, the best solution may very well be unconventional means. (An example of this is a package I'm creating that dynamically loads tons of modules, and even assigns to @ISA, so that users will have very little to deal with when using it.)

    You'll find a lot of widely used software makes similar amenities to users; catering to an Aristotelian sense of the lesser of two evils, by sacrificing conventions for ease of use for the average Joe. Is that bad or good? I don't know.

    Gyan Kapur
    gyan.kapur@rhhllp.com
      "...catering to an Aristotelian sense...is that bad or good? I don't know." It's good. Too often people underestimate the value of user-friendliness and assume that it is always necessary to be conventional. Besides, Aristotle has taught me very well. -Asymptotic Freedom
Re: Simplicity vs. Doing It Right
by PodMaster (Abbot) on Oct 13, 2002 at 06:41 UTC
    $^RANT=1;

    "The right way" is the easiest way for me. I don't wanna learn the inner workings of HTTP or other very complicated protocols, just so I can serve up some images and stuff.

    Learning to use modules is easier than learning complicated crap.

    Real problems or not, you do things your way, and others'll do it theirs, regardless.

    Looking it over, I got to thinking about some of the people who wander in here with real problems to solve. We usually advise them to use strict, use CGI, and reuse some set of modules from CPAN.
    Damn straight!

    This website is about learning perl, and while re-inventing wheels is fun and part of learning perl, why would you want to re-invent a wheel when you don't know how to use an existing one?

    Why would you want to help some poor sap DEBUG broken CGI parsing code?

    When is the last time you heard someone say "Help, I need to write a Win32 GUI program, but I can't use MFC or any other pre-written libraries for building GUIs, please help, this is a real world problem!!!!!!!!"

    When is the last time you heard someone say "Help, I need to write an assembler program to draw a nice GUI but I can't use any pre-written libraries. Please help this is a real world problem!!!!!"?

    Every time I hear "can't use this/or/that module", I think to myself, "well, then go elsewhere for help".

    ____________________________________________________
    ** The Third rule of perl club is a statement of fact: pod is sexy.

      Why would you want to re-invent a wheel when you don't know how to use an existing one.

      Because you have a problem to solve now, and the learning curve for the wheel is steep. Knowing about the wheel may be good enough. You can come back to it at your leisure later, and you'll have the benefit of having solved the problem the wheel purports to solve, so that you'll better appreciate it.

      Or perhaps the wheel requires an axle, and the axle requires a transmission, and before long you've picked up half a car along with the wheel, and your CGI now takes 5 seconds to start, and your ISP doesn't support mod_perl. You might still want to know about the wheel, but you also want to know about alternatives.

Re: Simplicity vs. Doing It Right
by Aristotle (Chancellor) on Oct 13, 2002 at 11:39 UTC

    jplindstrom++, best post on the thread IMO.

    He points out exactly the points my signature is meant to put in one short sentence. Any piece of software tends to grow if it does something useful; if you don't do it the right way right off the bat, you end up with a big ball of mud full of kludges and quick hacks that you'll basically have to rewrite from scratch, or spend a long time laboriously refactoring.

    People sure are free to stray from the beaten path once they understand why so many people have walked it. Once you can tell exactly when and how your homegrown CGI or XML parser will break, you are free to write one when you think CGI or XML::* is overkill. But that's not something you will be able to do before you have acquired a fair degree of proficiency in both the language and the specific problem domain.

    And by definition, anyone who comes to the monastery with questions about those isn't.

    Makeshifts last the longest.

      Any piece of software tends to grow if it does something useful

      I can't agree with this.

      Some software follows this pattern and when it does, it usually follows it right to the point where it does a lot of stuff which isn't useful and none of it very well. I think that's true whether it's full of kludges or cleanly designed.

      In my experience, there is a whole helluva lot of software out there in the world which does something useful, requires almost zero maintenance, and runs daily on production servers. A lot of it is just the kind of makeshift stuff that dws is talking about, I think. That stuff doesn't grow. It just runs and no one touches it because they don't want anything to break.

      Funny, but I always thought your signature referenced that latter class of deployed code.

      -sauoq
      "My two cents aren't worth a dime.";
      
          Some software follows this pattern and when it does, it usually follows it right to the point where it does a lot of stuff which isn't useful and none of it very well. I think that's true whether its full of kludges or cleanly designed.

          In my experience, there is a whole helluva lot of software out there in the world which does something useful, requires almost zero maintenance, and runs daily on production servers. A lot of it is just the kind of makeshift stuff that dws is talking about, I think. That stuff doesn't grow. It just runs and no one touches it because they don't want anything to break.

        Part of the problem is that you can't know in advance which software will need to grow in functionality, need to be rewritten or need to be maintained.

        It's generally best to assume that the software will need to be maintained and that you should code with that in mind.

        How many times have you been stuck in a quagmire, fixing something that should have been built better in the first place? I know I have, many a time. In at least some of those cases, I know the original author never dreamed that this particular piece of software would be used so long, or other software would become so dependent on it when they implemented their "makeshift". I know this because I was that original author.

        In most of the cases when I wasn't the original author, I strongly suspect the original author didn't foresee the future maintenance.

Re: Simplicity vs. Doing It Right
by adrianh (Chancellor) on Oct 13, 2002 at 13:00 UTC

    This thread reminds me of some of the arguments about the implications of DoTheSimplestThingThatCouldPossiblyWork in XP (that's Extreme Programming this time... XP must win the award for most overused abbreviation...).

    I tend to think that:

    we'd better plan for the future by building solider solutions than are asked for

    is really a separate issue from

    leveraging tested, off-the-shelf code where possible to save effort.

    The latter is nearly always a good thing as far as I am concerned.

    The former can get you into trouble.

    You waste time building a complex solid solution that you then have to throw away when the requirements change.

    If you'd built a small problem-specific solution you would still have had to throw it away - but have wasted less time in producing it. Since the small solution would have taken less implementation time it would also have helped identify the fact that the requirements needed changing sooner - a good thing.

    The only time this falls down is if you're in a development environment that doesn't give you the freedom to go back and refactor that problem-specific solution into something more generic if/when it becomes necessary.

Re: Simplicity vs. Doing It Right
by Anonymous Monk on Oct 13, 2002 at 18:23 UTC
    I absolutely agree. There is a reason why Perl is explicitly designed to allow "baby talk", but that point of view is not well-represented on PM.

    The real issue is that there are many different value systems that can reasonably be applied. That is the heart of Worse is Better - worse by one value system is better in another, and that confuses the heck out of humans who are inclined to think that "worse" and "better" have an absolute meaning. They do not.

    The value system in use here is that we value that which makes us more competent programmers. What you noticed is that you get different answers if you value getting the task at hand done. If you value making things easy for those who are not competent and who have no desire to become so, then you get completely opposite answers - Matt Wright becomes a good thing! Of course he is horrible if you care about security or code quality. I am only pointing out why a value system that we dislike can lead to popularity.

    A good question to ask is what our value system should be, and how well we satisfy it. I gave what I think it is above, but don't think we do a good job of it. We often prefer to repeat slogans rather than do the work needed to make them true. For instance, we say use CGI or die; - but do we open up the source code and see that $ENV{CONTENT_LENGTH} is only checked for multi-part post requests? (Bug report submitted.) We say that using standard components is good because people audit it - but do we notice evidence that it isn't all that well audited in practice? We cheer at claims that open source can't have back doors in it. But if I wanted to write my own backdoor I would just write in C and deliberately include a buffer overflow. If someone caught me, who would suspect a thing? How many back doors are on your system right now?

    Is what you call "obviously the right thing" necessarily as right as you think? And is Perl designed for the values that you personally hold dear? Really?

Re: Simplicity vs. Doing It Right
by chromatic (Archbishop) on Oct 13, 2002 at 07:05 UTC
    The "right way" adds weight that isn't always needed.

    Optimization (for speed) and simplicity are rarely allies.

    Update after response: Added parenthetical clarification.

        Optimization and simplicity are rarely allies.

      I disagree with this.

      What dws is referring to is not what I would call simplicity. It's what I would call extreme pragmatism. The application to which he refers would be much simpler had it been done the "right" way. Simpler, in that it would be simpler to understand, maintain and extend. As he describes it, it would take a pretty senior person to sort it out and extend it as it stood.

      Extreme pragmatism has its place as well. Had the original programmer of this test program been given an edict from management that he would not be allowed to use a standard web server and that he had to implement this test suite in a week or there would be no test program, well, under those circumstances, I think the right choice was made.

      However, generally I subscribe to this:

      There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. --C.A.R. Hoare

      The best optimizations can be performed on something that is designed to be as straightforward and simple as possible. Only when the underlying design is clear can we begin to really understand where bottlenecks might arise or where there are repeated operations that can be profitably micro-optimized.

      I also happen to agree with this:

      Things should be made as simple as possible -- but no simpler. - A. Einstein

      Some things are inherently complex. It's simple to design a database as a group of text files and perform queries with greps, but it doesn't meet most design criteria for speed and flexibility. There is a balance.

Re: Simplicity vs. Doing It Right
by BrowserUk (Patriarch) on Oct 13, 2002 at 17:53 UTC

    ++dws, and thank you for saying it. This is a similar point to the one I was trying to make in Being a heretic and going against the party line.

    It's not that I have anything against using modules, nor that I don't appreciate the long term benefits of the tested-ness (sorry for that but I simply couldn't think of a better way of putting it) that comes from using the proven solutions and would probably do so in most cases for production work or anything that I had to revisit more than once.

    However, even assuming that you know the right module to use rather than needing to go through a long list of possibles to determine which one is right, it can still be a pain.

    As an example, in this post, I set out to 'do it the right way' and use CGI.pm. After nearly 2 hours of trying, I gave up and hand-rolled my html output because I could not work out the right combination of maps, anonymous arrays and CGI functions to produce the output I wanted. I came close a couple of times, but as soon as I tried to add the final piece to the puzzle it just crashed around my ears.

    So I opted for a loop and some print statements and it worked first time (honest!) and took less than 5 minutes. When I chose to add an extra (left-hand) column to my table, it was obvious exactly where and how to do it. I gave up completely on the idea when I was trying to use CGI.pm.

    I admit, by the time I got to writing the homebrew version, I had the benefit of a lot of thinking that I had acquired trying to do the CGI/maps/distributive function method. Even so, a 24:1 ratio of time taken to achieve a solution (actually higher, as I never did) would pay for a lot of maintenance were that ever necessary.

    I guess you pays your money and makes your choice, but if I am paying the bills, I will opt for the 'simple solution' over the 'right way' every time.


    Cor! Like yer ring! ... HALO dammit! ... 'Ave it yer way! Hal-lo, Mister la-de-da. ... Like yer ring!
      You must be referring to the code over at Re: Re: Hex color degrader. I decided to take the 'Pepsi Challenge'™, and - not to best you - i thought it would be beneficial for the community to post a very similar solution that used CGI.pm's methods: (refer to said link for original code)
      print start_table, Tr( td(' '), th [@colors] );
      for my $scale ( -5 .. +5 ) {
          print Tr( td($scale/10) ),
                Tr( map { td {bgcolor=>'#'.$_}, $_ }
                    map { Dlighken($_, $scale/10) } @colors );
      }
      print end_table;
      Thanks to your existing code, it only took me about 5 minutes or so to port - however, when one considers how much total time that i have spent learning how to massage CGI.pm, i feel that your argument here bears even more weight. I liken this to playing a musical instrument - the time one spends practicing different scales, licks, chops, etc. will, in the long run, improve one's improv skills. But, when on the spot ... it's best to stick with what one knows right now.

      Cool code, BTW. :)

      jeffa

      L-LL-L--L-LL-L--L-LL-L--
      -R--R-RR-R--R-RR-R--R-RR
      B--B--B--B--B--B--B--B--
      H---H---H---H---H---H---
      (the triplet paradiddle with high-hat)
      

        In my defense, I was trying to be a little more CGI.pm about it and adopt the 'poetry mode'. With another hour of giggling I finally cracked what I was so close to before.

        This produces the same output as my original above and uses but a single print statement, in deference to all those posts I've seen saying "You don't want all them print statements, use CGI.pm" ;^).

        Whether this is better or worse than my original or yours, I'll leave the reader to draw their own conclusions.

        #! perl -sw
        use strict;
        use CGI qw/:standard *table/;
        use CGI::Pretty;

        sub Dlighken {
            return sprintf '%02x' x 3,
                   map { ( $_ *= 1 + $_[1] ) > 0xff ? 0xff : $_ }
                   map hex, unpack 'A2' x 3, $_[0];
        }

        my @colors = qw/800000 808000 008000 008080 000080 808080 c0c0c0/;

        print table(
            Tr( th(' '), th( [@colors] ) ),
            map {
                my $scale = $_/10;
                Tr( td($_),
                    map { td( { bgcolor => "$_" }, [$_] ) }
                    map { Dlighken( $_, $scale ) } @colors )
            } ( -5 .. +5 )
        );
        __END__

        Cor! Like yer ring! ... HALO dammit! ... 'Ave it yer way! Hal-lo, Mister la-de-da. ... Like yer ring!

      I hardly see a reason to use CGI's HTML generation routines - the output is still hardcoded, so you don't really win a thing over just putting strings in your code. The only possible difference is it automatically produces well formed XHTML. Shrug. If I hardcode my HTML for a small script, I use a heredoc or something. Anything else, I use Template Toolkit II. (Literally everything; I'm starting to use it in command line scripts too.)
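      The heredoc approach for a small script might look like this (a minimal sketch; the page title and body are made-up placeholders):

      ```perl
      use strict;

      # Hardcoded HTML via a heredoc: no HTML-generation functions needed.
      # $title and $body are illustrative placeholders.
      my $title = 'Hex color table';
      my $body  = '<p>...</p>';

      my $page = <<"HTML";
      Content-type: text/html

      <html>
      <head><title>$title</title></head>
      <body>$body</body>
      </html>
      HTML

      print $page;
      ```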

      If you look around you'll see that there's hardly anything like a consensus that CGI.pm's HTML routines are The Right Way, and in fact some quite highly regarded folks will tell you to first look for another solution.

      Just because a module offers some functionality, even if it's a core module, doesn't mean you have to or even should use it. FindBin f.ex is so badly broken I actually find it sort of embarrassing that it's in the core.

      Makeshifts last the longest.

Re: Simplicity vs. Doing It Right
by sauoq (Abbot) on Oct 13, 2002 at 23:42 UTC

    This was one of the better posts I've read here. I've read a lot of great posts here too, so that's saying something.

    I got to thinking about some of the people who wander in here with real problems to solve. We usually advise them . . .

    I usually dole out such generic advice because I'm sure that it's not wrong in most cases rather than because I think it is best for a specific case. It usually works out because if someone is asking a question which can be met with a canned answer then they are likely to be inexperienced enough that the canned answer will serve them best anyway.

    I don't always follow my own advice. I often write scripts in which I implement the one hundredth of Random::Module's functionality that I need because I don't need the other ninety-nine percent of it. I make those decisions with my eyes open, though. I keep my requirements in mind and I research alternatives if I'm not already familiar with them.

    Available modules are often not the best solution. Using them doesn't always result in more robustness, more maintainability, or less work.

    Using modules requires that you rely on the work of people outside your organization and code which isn't developed with your guidelines, procedures, or standards in mind. Module authors usually aren't paid for their work and maintenance can be sporadic.

    Depending on your environment, installing modules may be difficult. Keeping versions synchronized may be realistically impossible. Upgrading modules can break current functionality. Unlike binary shared libraries, getting multiple versions of a Perl module to coexist isn't always easy to do. Modules often contain XS code or rely on external libraries for which cross-platform support isn't as good as it is for Perl itself.

    Just as the module may add some overhead to the project, the module's interface may well add overhead to your actual code. Abstractions chosen by module authors in an attempt to make some difficult tasks relatively simple may also make some simple tasks relatively difficult and the added complexity can be more hassle than it is worth. This is evident in the number of *::Easy and *::Simple modules on CPAN.

    A regular expression to extract an easily identifiable link from some HTML might well make more sense than parsing the whole document and it might be a lot more robust too. It's perfectly acceptable to just print "Content-type: text/html\n\n"; and get on with life if your CGI script doesn't take parameters. Why not open a socket and print "HEAD / HTTP/1.0\n\n"; if that's really all you need to do?
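    That raw-socket HEAD request might be sketched like this, using only the core Socket module (a hypothetical sketch; the hostname is illustrative, error handling is deliberately thin, and strict HTTP wants \r\n line endings, though many servers tolerate the bare \n used in the post):

    ```perl
    use strict;
    use Socket;

    # Minimal HEAD request over a raw socket -- core Socket module only.
    # Host and port are illustrative arguments supplied by the caller.
    sub head_request {
        my ( $host, $port ) = @_;
        socket( my $sock, PF_INET, SOCK_STREAM, getprotobyname('tcp') )
            or die "socket: $!";
        connect( $sock, sockaddr_in( $port, inet_aton($host) ) )
            or die "connect: $!";
        select( ( select($sock), $| = 1 )[0] );    # unbuffer the socket
        print $sock "HEAD / HTTP/1.0\r\n\r\n";
        local $/;                                  # slurp the whole response
        my $headers = <$sock>;
        close $sock;
        return $headers;
    }

    # e.g. print head_request( 'www.example.com', 80 );
    ```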

    Knowing what you really need is, after all, the key. If you need to make multiple HEAD requests for documents residing on different servers, then you should probably be using LWP. If your CGI application is even mildly complex, then you are better off using CGI.pm. If your application needs to extract links from random HTML, then parse it.

    Canned solutions work well in the average case. I think the same can be said for the canned advice we give.

    -sauoq
    "My two cents aren't worth a dime.";