http://www.perlmonks.org?node_id=1114424

You've got your typical company started by ex-software salesmen, where everything is Sales Sales Sales and we all exist to drive more sales.

On the other extreme you have typical software companies built by ex-programmers. These companies are harder to find because in most circumstances they keep quietly to themselves, polishing code in a garret somewhere, which nobody ever finds, and so they fade quietly into oblivion right after the Great Ruby Rewrite, their earth-changing refactoring-code code somehow unappreciated by The People.

-- The Development Abstraction Layer by Joel Spolsky

Though my natural inclination is to be a bit OCD about keeping code clean, I concede that spending too much time and money on refactoring, writing programmer tools, and endlessly polishing code will likely lead to commercial failure. As will the converse, namely neglecting your developers and their code and architectures in favour of sales and marketing. Successful software companies tend to have a healthy balance.

Refactoring

Booking.com, perhaps the most commercially successful Perl-based company, has caused a bit of controversy over the years with their attitude towards refactoring. To give you a flavour, I present a couple of comments below:

Booking is destroying my career because I am not allowed to do anything new. I am not allowed to use new technologies. I'm not allowed to "design" anything big. I am not allowed to write tests. I am allowed to copy that 500 line subroutine into another module. If people have done that several times before, maybe it should be refactored instead of duplicated? If you do that, you get in trouble. As one boss says, "we do not pay you to write nice code. We pay you to get job done."

Management, and the term is quite loose when applied to Booking.com, sees no gain in refactoring code. By refactoring I'm talking about taking a few weeks to rewrite an existing piece of software. By definition refactoring doesn't bring new functionality so this is why management is reluctant to go down that road. We're quite lenient about code that gets added to the repo, as long as there's a business reason behind it. If a quick hack can be deployed live and increase conversion then it will be accepted. But rest assured that crappy code doesn't last long, especially if other devs have to use it or maintain it.

-- from Truth about Booking.com (Blog)

One of the posts specifically deals with the culture of "get it done and fast" and how they do not encourage refactoring or basic testing. I actually work in a Perl shop where management has the same kind of mentality, and it is slowly killing our efficiency.

Regarding testing, it's true that we're not very unit testing focused. This is mainly because we've decided to spend most of the time/money/infrastructure that you might usually spend on unit testing on monitoring instead. If you have unit tests you still need monitoring, but in practice if your monitoring is good enough and you have an infrastructure to quickly rollout & rollback systems you can replace much of unit testing with monitoring.

We're not averse to refactoring when appropriate. But if you're going to propose rewriting some code here you'll actually have to make a compelling case for it which isn't just "the old code is hairy". Do you actually understand what it does? Maybe it's hairy and complex because it's solving a hairy and complex problem. Are you not aware of where this system fits into the big picture? We've also had code that looked fantastic, had tests, used lots of best practices, that we've had to throw away completely because it was implementing some idea that turned out to be plain stupid.

-- from What exactly is up with Booking.com? (reddit)

Opportunistic Refactoring and The Boy Scout Rule

Some people object to such refactoring as taking time away from working on a valuable feature. But the whole point of refactoring is that it makes the code base easier to work with, thus allowing the team to add value more quickly. If you don't spend time on taking your opportunities to refactor, then the code base gradually degrades and you're faced with slower progress and difficult conversations with sponsors about refactoring iterations.

There is a genuine danger of going down a rabbit hole here, as you fix one thing you spot another, and another, and before long you're deep in yak hair. Skillful opportunistic refactoring requires good judgement, where you decide when to call it a day. You want to leave the code better than you found it, but it can also wait for another visit to make it the way you'd really like to see it. If you always make things a little better, then repeated applications will make a big impact that's focused on the areas that are frequently visited - which are exactly the areas where clean code is most valuable.

-- Opportunistic Refactoring (Martin Fowler)

The Boy Scouts have a rule: "Always leave the campground cleaner than you found it"

What if we followed a similar rule in our code: "Always check a module in cleaner than when you checked it out"

-- The Boy Scout Rule (O'Reilly)

At work, we perform opportunistic refactoring following the Boy Scout rule, trusting the judgement of developers. How do you do it at your workplace?

Code Reviews

By way of background, my company went agile about five years ago, at first with great zealotry, nowadays with more maturity and less dogma.

Before check-in, all code must be reviewed, either continuously via pair programming, or via a lightweight code review (typically over-the-shoulder). We also have a coding standard, though it is not strongly enforced.

To give a concrete example, during a code review the other day, I persuaded the author to eliminate unnecessary repetition by changing this snippet:

my $config = <<'GROK';
ADD UDP_LISTENER ( 515 )
ADD UDP_LISTENER ( 616, 657 )
ADD UDP_LISTENER ( 987 )
GROK

my @test_cases = (
    {
        desc => "# Test 1",
        conf => $config,
        find => [ 'port = 515', 'port = 616', 'port = 657', 'port = 987' ],
    },
);
to:
my $liststr = 'ADD UDP_LISTENER';
my @ports   = ( 515, 616, 657, 987 );
my $config  = <<"GROK";
$liststr ( $ports[0] )
$liststr ( $ports[1], $ports[2] )
$liststr ( $ports[3] )
GROK

my @test_cases = (
    {
        desc => "# Test 1",
        conf => $config,
        find => [ map { "port = $_" } @ports ],
    },
);

What would you have done?

I'm sure some other programmers at my company wouldn't have bothered suggesting any changes at all: after all, the code worked as is, it's pretty clear, plus "it's only a test script", so why bother?

Though I felt the code was more maintainable with duplication eliminated, I had another motivation in this specific case: training. You see, the programmer in question was very new to Perl and, as I found out during the review, had never used map before! Training (and improved teamwork) are important benefits of code reviews.
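For anyone meeting map for the first time, here is a minimal sketch of the idiom, using the same port list as the refactored test above:

```perl
use strict;
use warnings;

# map applies the block to each element of a list and collects the
# results; here it derives the 'port = NNN' strings from one list of
# ports instead of writing each string out by hand.
my @ports = ( 515, 616, 657, 987 );
my @find  = map { "port = $_" } @ports;

print "$_\n" for @find;
```

Adding a new port now means touching one list rather than two parallel pieces of text.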

Eliminating unnecessary duplication and repetition is a common discussion topic during code review in my experience. (Note: I did not include this example to argue further about what DRY means exactly in Room 12A :). Other common discussion points during code review are:

Note that we do not normally discuss code layout because all code is pushed through Perl::Tidy before review.

I'm interested to learn about your workplace experiences. In particular:

Cleverness

To finish, here's another one, derived from Clever vs. Readable.

Would this statement pass your code review?

my $value = [ $x => $y ] -> [ $y <= $x ];
If not, would you suggest changing it to:
my $value = $x < $y ? $x : $y;
or:
use List::Util qw(min);

my $value = min( $x, $y );
Or something else?
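For readers puzzling over how the "clever" version works: the comparison yields false ("" which numifies to 0) or true (1), and that boolean is used as an index into a two-element anonymous array. A minimal sketch with made-up values:

```perl
use strict;
use warnings;

my ( $x, $y ) = ( 3, 5 );

# $y <= $x is false here, i.e. 0, so index 0 selects $x (the smaller
# value); had $y been <= $x, the true value 1 would have selected $y.
my $value = [ $x, $y ]->[ $y <= $x ];

print $value;    # prints 3, the minimum
```

Note that it builds and discards an anonymous array on every evaluation, which is part of why several monks below object to it.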

References

Updated Aug 2015: Added Ternary vs. Sort vs. Max reference.

Re: The Boy Scout Rule
by hippo (Bishop) on Jan 25, 2015 at 14:04 UTC

    To answer the last question, I have to say that

    my $value = [ $x => $y ] -> [ $y <= $x ];

    would not pass my code review. It is clever, but apparently pointlessly so. The fat comma in particular appears to be present only to cause confusion - a normal comma would add (a little) clarity. This code, even if commented, would cause many programmers to pause while they worked out quite what it was doing. That may only take a couple of seconds for an expert but pity the poor Perl newcomer who stumbles upon this.

    For my money the ternary conditional version is perfectly clear and without overhead and would be the way I would choose to code this operation.


    Regarding code reviews in general and workspace policies - I am essentially freelance and therefore am exposed to a wide range of different restrictions and policies. Generally speaking there are coding standards and a lot of the time these are enforced for the most part automatically (ie. on commit or pre-release). I don't do any pair-programming but there are code reviews of varying nature most of which would fit into your "lightweight" category. They don't tend to dwell on the minutiae; it is more a case of establishing clarity of purpose and eliminating flaws in security and robustness and promoting efficiency.

    It is my personal belief (opinion alert!) that it will benefit any programmer to be exposed to code written in a wide variety of styles. That is partly why I am here in these hallowed halls. Here I see idioms, layouts, compound operators, data structures and algorithms which I would not generally have considered myself, to say nothing of being introduced to many useful modules which would otherwise have escaped my attention. With that in mind, communication between programmers whether on online fora or within (or between) development teams or even RLMs such as Perl Mongers are to be encouraged.

    Thanks for this interesting meditation.

    Hippo

      my $value = [ $x => $y ] -> [ $y <= $x ]; would not pass my code review.

      I agree it’s confusing the first time and the => should be a skinny comma :P instead. That said, the Schwartzian transform is even more confusing the first time you see it. No one in a post 5.6 Perl world would suggest rewriting it with a bunch of temp arrays and for blocks. So, I advocate simple little idioms like the above when they offer something more than clever/pretty code.
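      (For anyone who hasn't met it: the Schwartzian transform is the map/sort/map idiom for sorting on a computed key without recomputing that key in every comparison. A minimal sketch, with made-up file names:)

```perl
use strict;
use warnings;

# Hypothetical data: sort file names by their length without calling
# length() inside every sort comparison.
my @files = qw(notes.txt a.pl longer_name.conf b.c);

my @sorted =
    map  { $_->[1] }                # 3. unwrap the original value
    sort { $a->[0] <=> $b->[0] }    # 2. sort on the precomputed key
    map  { [ length($_), $_ ] }     # 1. compute the key once per element
    @files;

print "@sorted\n";    # b.c a.pl notes.txt longer_name.conf
```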

      So, thinking to possibly defend the clever/pretty one, I tried a Benchmark, which I'm not necessarily doing right so someone please jump in if it’s badly formed, and the one that might be the most semantically clear and I assumed would be the slowest is the fastest by a good measure. I forgot that List::Util is XS.

      use strictures;
      use Benchmark "cmpthese";
      use List::Util "min";    # This is XS.

      my @xy = (
          [ 1, 0 ], [ 0, 1 ], [ 0, 0 ], [ 1, 1 ],
          [ 1_000_000, 999_999 ], [ 999_999, 1_000_000 ],
      );

      my $m;    # Avoid void in comparisons.
      cmpthese( 10_000_000, {
          list_util => sub { $m = min(@$_) for @xy },
          ternary   => sub { $m = $_->[0] < $_->[1] ? $_->[0] : $_->[1] for @xy },
          clever    => sub { $m = [ $_->[0], $_->[1] ]->[ $_->[0] <= $_->[1] ] for @xy },
      });
                      Rate    clever   ternary list_util
      clever      347584/s        --      -56%      -68%
      ternary     792393/s      128%        --      -28%
      list_util  1096491/s      215%       38%        --

        Several problems:

        • With just 6 comparisons & assignments being run, the overhead of two subroutine calls -- the one you wrapped your tests in and the one Benchmark wraps those in internally -- becomes a significant factor in the tests.

          Pass strings instead of subs to remove one layer of sub-call overhead; and allow benchmark to eval them into subroutines. (It's going to anyway!)

          Use an internal loop multiplier to re-balance the test/overhead.

        • By passing your args wrapped in anonymous arrays -- thus forcing the ternary to do 3 dereferences; and the clever to do 4 dereferences; whereas List::Util only does one -- you bias the tests strongly in List::Util's favour.

          Most min() operations will operate on simple scalars so use those instead.

        • (Minor.) There is little value in using different and big integers; they are all just IVs (or UVs) as far as the comparisons are concerned.

          Test for differences in ordering (branch/no branch) by coding separate tests.

        The upshot is that the ordering makes no consistent difference (Ie. it flip flops from run to run); and that the ternary is hands down winner for the two simple scalars, common case:

        use strictures;
        use Benchmark "cmpthese";
        use List::Util "min";    # This is XS.

        cmpthese -1, {
            list_util_nb => q[ my( $x, $y ) = ( 0, 1 ); my $m = min( $x, $y ) for 1 .. 1000; ],
            ternary_nb   => q[ my( $x, $y ) = ( 0, 1 ); my $m = $x < $y ? $x : $y for 1 .. 1000; ],
            clever_nb    => q[ my( $x, $y ) = ( 0, 1 ); my $m = [ $x, $y ]->[ $x <= $y ] for 1 .. 1000; ],
            list_util_b  => q[ my( $x, $y ) = ( 1, 0 ); my $m = min( $x, $y ) for 1 .. 1000; ],
            ternary_b    => q[ my( $x, $y ) = ( 1, 0 ); my $m = $x < $y ? $x : $y for 1 .. 1000; ],
            clever_b     => q[ my( $x, $y ) = ( 1, 0 ); my $m = [ $x, $y ]->[ $x <= $y ] for 1 .. 1000; ],
        };
        __END__
        C:\test>junk30
                         Rate clever_b clever_nb list_util_nb list_util_b ternary_b ternary_nb
        clever_b       1210/s       --      -11%         -67%        -70%      -79%       -80%
        clever_nb      1356/s      12%        --         -63%        -67%      -76%       -78%
        list_util_nb   3694/s     205%      172%           --         -9%      -34%       -40%
        list_util_b    4062/s     236%      200%          10%          --      -28%       -33%
        ternary_b      5630/s     365%      315%          52%         39%        --        -8%
        ternary_nb     6107/s     405%      351%          65%         50%        8%         --

        C:\test>junk30
                         Rate clever_nb clever_b list_util_b list_util_nb ternary_b ternary_nb
        clever_nb      1297/s        --      -5%        -68%         -69%      -75%       -77%
        clever_b       1372/s        6%       --        -66%         -67%      -74%       -75%
        list_util_b    4078/s      214%     197%          --          -3%      -22%       -27%
        list_util_nb   4190/s      223%     205%          3%           --      -20%       -25%
        ternary_b      5228/s      303%     281%         28%          25%        --        -6%
        ternary_nb     5556/s      328%     305%         36%          33%        6%         --

        List::Util::min() will obviously win in both speed and clarity for the min( @array ) case.


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
        In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
Re: The Boy Scout Rule
by BrowserUk (Patriarch) on Jan 25, 2015 at 23:19 UTC

    Booking.com

    As the major contributor to their parent company's (Priceline) $4.8 billion annual revenue, $1.1 billion profit and $29 billion market cap., they appear to be doing something right. And given that their core business model has remained essentially the same since they were taken over in 2005, perhaps (part of) the secret that makes this Perl-based company stand out from its (original) peers, is that it hasn't succumbed to any of the fads that place the programming process and programmers above the business model.

    Their basic business model hasn't changed; thus the required processes haven't changed much. When new code is required, it is very likely to need to do something very similar to stuff that already exists, and is proven to work.

    You don't re-write (or refactor) code unless there is an identifiable, demonstrable reason that benefits the business revenue stream. Entertaining programmers is not such a reason.

    Sure, there are times when it is possible to make the case that rewriting a piece of working code will benefit the business -- by improving performance; or simplifying (an existing, bad history of difficult) maintenance; or perhaps reducing runtime memory requirement by combining two or more similar pieces of code into one. But the case needs to be made and demonstrated. First.

    Opportunistic Refactoring

    My take on opportunistic refactoring is different from the interpretation I read here. Rather than: I've got some time on my hands so let's go looking for something to change; I interpret it to mean: I am in this piece of code anyway -- due to a bug to fix or functionality to add or change -- and if I see something else here that can be (demonstrably) improved whilst I'm here, then I make a case for doing so.

    Example 1.

    First, I think your changes obfuscated rather than clarified that code:

    1. You introduce two extra variables.
    2. You didn't remove the repetition:

      just substituted 3 occurrences of a meaningless variable name for 3 occurrences of a self-describing text constant; and four occurrences of another variable name, plus 4 integer constants (index numbers), that have to be visually cross-referenced with the actual, meaningful integers.

    The original code is instantly clear and readable to its purpose; the refactor involves 3 levels of mental indirection to undo what you did.

    The training element of introducing the programmer to map is barely justification for such changes.

    And finally, if you have the time to faff around refactoring test code, you are under-employed.

    Cleverness

    I'm not averse to clever code; but there is nothing clever about that. It's not clearer. It's not simpler. It's not more efficient. It's not even less typing.

    Just obfuscation.

    What would I have done?

    Depends. If it was in test code, I'd probably insist that the programmer who wrote it describe what it does and how it does it, in an adjacent comment, and then I'd pick holes in that description until it was fully explained in excruciating detail. Something like:

    1. It creates a list from the two scalars;
    2. Constructs a two element anonymous array;
    3. Compares the two scalars;
    4. Converts the boolean result into an index;
    5. Dereferences the anonymous array;
    6. Applies the index to it;
    7. Extracts the selected scalar from the anonymous array and assigns it to the result;
    8. Discards the anonymous array it constructed.

    And I would nit-pick that description until it was precisely, & exhaustively accurate.

    I'm not suggesting the above is totally accurate; but the point is that teaching programmers to understand the consequences of their choices, is far more effective than laying down thou shalt/shalt not edicts.

    If it was production code, I'd do the same; and then require it to be changed to the ternary form.


      Thanks for an interesting read. Unfortunately I cannot add anything about code refactoring or reviewing, because all the code I write at work is my own: I was never asked to write it, nor can anyone else here read it; I'm the one with the Perl mania..

      About clever code. Many times (most times? every time?) words and sentences hide the real sense, or just mislead us away from the right part. If you disambiguate what 'clever code' can mean, you'll end up with many different things and a bunch of questions.

      First: who is clever, the code or the coder? And does clever mean the same thing when applied to a human as to a compiler?
      When applied to a human, clever code(r) seems closer to tricky, ingenious and skilful: when you manage to understand the lines of such code you think 'Oh wow, it can be done that way?', the same sensation as watching a barman do cocktails with flying bottles: beautiful, meaningless (in the sense that it is not required) and risky. The same goal can be reached in a plain, safer way with no downside (apart from the girl/boy impression effect).

      From the point of view of a compiler, I think clever code could mean: no unnecessary memory allocation, faster computation, no ambiguity. I suspect Perl prefers boring code, verbose and explicit. You can add simpler, more readable and less typing to the meaning too, because code is normally written by humans, so interaction is important.

      So code is clever when it has fewer requirements, uses less memory, disk and network, and runs faster overall. Portability, robustness and maintainability also play a part. In such code the cleverness of the coder is not immediately visible, but it is deeper and more effective.

      That said, as I work on my own, coding in Perl has to be amusing too, so some tricky lines here and there can survive the quality inspection.

      You said 'quality'? Please define 'quality'.. .. ooh no! ;=)

      L*
      UPDATE 27 01 2015: about quality, as found in blindluke's homenode:

      As Albert Collins once said,
      "Simple music is the hardest music to play and blues is simple music."
      I'm trying to write simple code, just as hard as I'm trying to play simple music.


      L*
      There are no rules, there are no thumbs..
      Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
Re: The Boy Scout Rule
by blindluke (Hermit) on Jan 25, 2015 at 11:06 UTC

    Thank you for this meditation. I work on the operations side of things, and the code we do here is mostly automation and monitoring tools. Due to their focused scope, they are usually created by a single author, and maintained by a single person, usually the author of the solution himself. The approach to refactoring was once described by one of my colleagues as:

    Each time I notice a nice trick, a better way of doing things, or a good module, I do a quick scan of my existing code base, to check if it can be improved by the "new thing".

    That's the way it looks: the refactoring is not triggered by the passage of time, but is strictly event-based. There is no weekly code review, no monthly refactoring phase. Just noticing new, better ways of doing stuff.

    This leads me to two observations: first, in an environment like this, communication is crucial. If I notice a new module, I spread the news, since it might trigger an improvement. If someone tells me about a simple data structure he used in his script, it might lead to improvements in my code. Talk about the new, better things as often as possible.

    The second observation is: don't try to refactor code that you don't want to be responsible for. If you see something that seems 'wrong' to you in someone else's code, either introduce the change and take the responsibility for maintaining the script afterwards, or just make the suggestion to the person currently maintaining the script. When working in a place that has people, not teams, maintaining the scripts, it's possible that something that would be more clear and maintainable to you, will not seem that way to the owner / maintainer of the script. Convince him, or let him convince you, either way, engage in communication.

    - Luke

Re: The Boy Scout Rule
by flexvault (Monsignor) on Jan 26, 2015 at 10:25 UTC

    Hello eyepopslikeamosquito,

    I enjoyed your post and thanks for the research and references.

    I would add as a reference the book "The Lean Startup" by Eric Ries. Not because it's Perl related ( it isn't ), but because it discusses in detail the conflict between "business" and "programming" value(s) to a company. Your discussion about Booking.com brought this book to mind.

    As a programmer, I have always wanted to get a "perfect" finished product before announcing/shipping it. The book was written by a programmer and that was how he was taught. But he discovered that the best way was to build an MVP, or Minimum Viable Product, and then test the waters, and then retest again and again. He also found that because of how he was trained, he was a major stumbling block for building a successful business.

    This book changed how I look at programming and business. I don't try to perfect something that nobody wants, and I suspect ( IMHO ) that successful software ( or software-dependent ) companies prefer an MVP to a programmer's perfect product. (YMMV)

    Regards...Ed

    "Well done is better than well said." - Benjamin Franklin

Re: The Boy Scout Rule
by choroba (Cardinal) on Jan 26, 2015 at 21:22 UTC
    Interestingly, it's the middle management in this company that forces us to use the "Boy Scout Rule" (under an even crazier name). The reasons? Lower management is content with "getting the job done", as they've been for the last ten years. As a result, it's almost impossible to hire a new programmer who doesn't flee within a couple of months. The code is ugly to touch, untested, uncommented, copypasted, cargoculted, etc. The "technical debt" is so huge they're able to measure it in cash. So, our team was hired to make things move: to improve the situation, bring in new technologies, show new tricks to the old dogs (read: rewrite everything in Java). We teach them why testing is needed, what advantages git has over CVS, how code review helps all the participants. I'm still unsure we can make it; and so are my colleagues: three of my five closest coworkers have already left for greener pastures.
    لսႽ† ᥲᥒ⚪⟊Ⴙᘓᖇ Ꮅᘓᖇ⎱ Ⴙᥲ𝇋ƙᘓᖇ
      The code is ugly to touch, untested, uncommented, copypasted, cargoculted, etc. The "technical debt" is so huge they're able to measure it in cash.

      I don't suppose there is any way of you showing us a sample is there?

      Perhaps after preprocessing to remove any identifying marks.


        You can find some interesting parts in my questions and meditations in the last year. However, most of the code is, unfortunately, agonisingly boring.

      This is usually the status quo that I initially walk into. The first, and perhaps the most difficult, step is to persuade management to treat a software project exactly as they would treat “the building of an automatic machine.”

      Computer software is, in fact, an automaton. Acting only and completely on the yes/no instructions of which it consists, the machine is expected to correctly perform (or, correctly and meaningfully decline to perform) a real-world business process for a business consisting of humans. If the software code-base here really is as you describe it to be, the root cause of the problem lies in [the lack of] software project management. The code was “untested,” yet it was released and is in service. There is no such thing as “technical debt,” but the business cost of software failure – or even, inadequacy – is more than “measured in cash.” If the organization does not fundamentally change its approach to software building, then any “rewrite everything in” successor will merely suffer the same fate.

      Usually, the root cause of the problems does not lie in the day-to-day activities of the code writers. The problem is upstream of this, in the business itself. But this is partly a social consequence of the very attitude that Joel’s article (Joel knows his audience ...) speaks to: that the software developer’s job is to take “business requirements” and to “write code” for it, and that those requirements ... a wish-list, really ... can, in fact, be changed arbitrarily without harm or consequence. Strangely, no one thinks that way when designing buildings or physical machinery. Yet, computer software is a machine with more degrees-of-freedom and loose-motion than any physical artifice could ever have.

      If you find yourself speaking to “CVS vs. git,” then this is probably a symptom of “lack of version control and/or of release discipline.” If you are even discussing the importance of code-review and testing, it’s a symptom that these things are not burned into the organization’s process culture. Basically, that there is no process culture. A dire situation like this one must be simultaneously addressed at multiple levels: (Okay, a taste of what I do for a living ... tough love.)

      1. Triage: Stop the bleeding. Get control of business blood-pressure, even if you must amputate a limb of the existing app/web-site (temporarily ... or not ...) in order to stabilize what’s left. Stop all “future development,” because it will not matter in the end if a corpse [failed business ...] has yet-another half-grown arm in its [defunct ...] web site.
      2. Eliminate “self-serving software excuses for” actual project management: Out with the Scrum, the Agile, the euphemisms like “technical debt.” “Everyone, please sit down.” No amount of quibbling about exactly how a group of software-writers spends their work day will, in the end, make the slightest bit of difference, as long as the teams are being asked to perform a series of tasks that are not rigorously defined before being presented to them, having first gone through an analysis & planning stage which translates business requirements into modifications to a now-moving machine otherwise known as “the application.” Yellow sticky-notes don’t solve anything, and focusing on such things is merely indicative of the root problem. Carpenters and masons and electricians do not have discretion.
      3. Get to “Big-D Done,” then “Move From Done to Done”:
        1 = It Works, completely, perfectly, and in all cases. 0 = It Doesn’t.
        “Yeah, it sucks to be binary,” but a digital computer is. The software machine consists of millions of freely-interacting moving parts, all of which are “either Yes or No.” “Either Done or Not-Done.” “Proved to be Correct, and to stay Correct in all cases. Or, not.” You can’t talk about “technical debt” because functionality is either in the product or it’s not, and the cost of any change is the same in terms of its risk to product stability. You didn’t “incur a debt to be paid-back later.” You didn’t do it. And even if you did, it’s most likely not Done.™

      I could continue, but I’d have to charge you. ;-) Basically, software development fails consistently because the work actually consists of building an automated, moving piece of machinery, but nobody approaches the task in that way. Programmers focus on how they arrange their tool-boxes, what they wear to work, and where they stand at 10:00. Business owners stand at a distance, staring at metrics but without knowledge of the process. Incomplete requirements are handed down because those that supply them don’t know what is required. Changes are handed down ... but without a change-order process ... because neither party understands the cost and risk of “any change at all.” And the software machine chugs along, full of broken parts, incomplete behavior, and badly-dented covers (emitting foul smoke) which haven’t been opened in years.

      The business failure, though, is not a failure of computer technology, nor of the language(s) that are used. The business failure is ... well ... a business process failure. But it is also a failure to recognize that the singular ruling constraint of this kind of project – altogether different from any other type of project – is the software machine. At the end of the day, no one is there but the machine and its user. The programmers, the managers, the testers: no one has any direct influence on what the machine does. No other type of project that has ever been “managed” has that characteristic, and “that characteristic” trumps all other concerns. It is the Nature of The Beast.

      (You can find it on Kindle (Amazon) now; soon to be on Apple platforms too: Managing the Mechanism, by Vincent P. North.)

        If you find yourself speaking to “CVS vs. git,” then this is probably a symptom of “lack of version control and/or of release discipline.”

        That is ludicrous. There are literally CVS operations that can take an hour which take 10 seconds in git. CVS is the IE of RCSes.

        I could continue, but I’d have to charge you.

        I suspect consumer protection law would be in play at that point.

        Software does get treated differently. The perception being that it is "soft", therefore malleable.

        Our requirements group prepares requirements for 3 groups: Mechanical, Electrical and Software. They are familiar with and use the processes required for mechanical and electrical specifications. But when we (the software group) ask them to follow the same processes they follow for mechanical and electrical, they claim that those processes are too slow so they would not be able to deliver specifications to us in time. So they would need to issue preliminary specifications to us - but that would be extra work, so better to keep using the current processes.

        Then, the upper level managers state that the business case for using software at all is the flexibility software allows and the speed it can be developed. Therefore, if we follow processes oriented for creating hardware, we negate the business case for using software.

        Of course, when we get incomplete/ambiguous/self-contradictory specifications, we still get blamed for not delivering what was wanted. And when we do ask for clarifications, we get blamed for the delays introduced by the need to respond to our questions.

        So why do software developers keep developing software?

        At least for my team, most of the time we have fun making our software make electro-mechanical "contraptions" do things.

Re: The Boy Scout Rule
by karlgoethebier (Abbot) on Jan 26, 2015 at 10:04 UTC
    "Would this statement pass your code review?..."

    I would use List::Util by all means.

    See also Don't be clever.

    Best regards, Karl

    «The Crux of the Biscuit is the Apostrophe»

      Yeah, using max() for two values is going overboard :) it's just like using bitshifting on codethinkied ... which is just like the anon-array-dereference eyepopslikeamosquito posted .... just use the ternary or if/else
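For two values, the choice the thread is debating can be sketched like this (a minimal example; the variable names are made up, not from the original post):

```perl
use strict;
use warnings;
use List::Util qw(max);

my ($x, $y) = (3, 7);

# Both lines pick the larger of two values. List::Util's max() scales
# to any list, but for exactly two operands the core-Perl ternary is
# arguably just as clear and needs no import.
my $via_max     = max($x, $y);
my $via_ternary = $x > $y ? $x : $y;

print "$via_max $via_ternary\n";    # prints "7 7"
```

Where `max()` earns its keep is over a whole list, e.g. `max(@ports)`, which a ternary cannot express without a loop.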
Re: The Boy Scout Rule
by Anonymous Monk on Jan 25, 2015 at 19:56 UTC

    What would you have done?

    I wouldn't write this  $liststr ( $ports[1], $ports[2] )

    I'd write this instead  find => [ map { "port = $_ " } $config =~ m{(\d+)}g ],

    Why? I don't see any benefit in introducing two vars and a set of "magic numbers" (yes, misusing the term, I know)

    Also, I'd never let  desc => "# Test 1", remain; testing modules already number tests, humans should name them, so "find the ports" or "find four ports" or "find four farts"

     

    FWiW, I've heard of "Always leave the campground cleaner than you found it" but AFAIK it's not a Boy Scout rule
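    The one-liner suggested above can be made concrete like this (a minimal sketch; the config text and the "port" field name are assumptions following the post's example):

```perl
use strict;
use warnings;

# Hypothetical config text, standing in for whatever $config held.
my $config = "port = 8080\nport = 8443";

# In list context, m{(\d+)}g returns every captured digit run, so the
# clauses are built in one pass with no intermediate variables and no
# hard-coded array indices.
my @find = map { "port = $_" } $config =~ m{(\d+)}g;

print join(", ", @find), "\n";    # prints "port = 8080, port = 8443"
```

    The design point is that the data flow is visible in a single expression: extract, transform, use.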

Re: The Boy Scout Rule
by Anonymous Monk on Jan 25, 2015 at 15:34 UTC

    Well, Joel is an experienced writer who knows how to address his audience.   You are, of course, sailing on a yacht, not a dinghy, and you are intended to identify most-specifically with his “old salt.”   But the point of view of the rest of the yacht’s crew, and of the millionaire owner, and of every customer that the yacht exists to serve, must also be taken into account.   The fact that you are placed into a well-supported bubble also means that your point-of-view is not the only one that must be counted.   And, this is where a lot of the friction arises.

    These days, I am mostly a consultant, mostly dealing with existing projects that were written in a variety of languages, including but not limited to Perl.   These projects now have “gray-hair problems,” yet for the most part they are also still earning revenue from still-satisfied clients.   The developers (who are still left), however, always want to “refactor” the code ... to make it, somehow, “–er.”   They insist that it must be done; that their careers are eroding before their eyes without it.   But that’s not the business’s proper point-of-view, and this they do not see.   They count the business owners as being both uninformed and clueless, and often leave perfectly-good jobs for no good reason.

    Booking.com, for example, exists for two purposes:   to help travelers make bookings, and to help travel professionals receive the benefit of those bookings.   The company has been financially successful, but not because of Perl and not in spite of it.   Every day, it sails into waters surrounded by hungry sharks and enemy submarines.   If Bookings makes the slightest mis-step, or shows the slightest sign of weakness, they will pounce.   There will be no second chance.

    So, a primary testing-concern for Bookings is to be able to ensure that the software does not degrade, as seen by either of its two sets of paying customers.   The number-one concern is not whether the crusty-old code remains crusty (it will ...), but that it continues to earn revenue without incurring returns or loss of goodwill.   “Refactoring” is merely a euphemism for “[partial or total] rewriting.”   The business risk of doing any such thing is enormous, but any change whatever to software that is in service carries similarly disastrous risks.   The one and only way to counter that risk is through effective present-state and future-state Testing.   Testing which may or may not exist, and which, if it does exist, might not be adequate to avoid ... regression.   (And it is not hyperbolic to say that, “well, the Titanic ‘regressed.’”)   There is no room for error, because the potential business risk is infinite.   Those sharks and submarines won’t leave any flotsam behind.

    Therefore, it is most-important to be certain that each change which is introduced into the (now-legacy) code base is clearly understood, correctly installed and then deployed, and that it is known in advance (by objective testing) that regression will not occur ... so that it never does ... so that the torpedoes always miss and the sharks remain hungry.   These are procedural things, and IMHO “software testing” is especially about that procedure.   Testing is the minesweepers and anti-submarine craft which always sail in front of the fleet, and you can be quite sure they’re not just sitting on the foredeck, looking out at the surface of the water and saying self-confidently, “I don’t see anything.”

      ... and to anyone who says that PerlMonks does not log you out and allow your post to go as Anonymous Monk, leaving you with no ownership and no recourse ... well, it just did.   Again.   :-[   The post to which this is a reply, used to be my post.   So, if you like, here’s your substitute downvote-target.

        Someone else suggested this already, I repeat: a good software dev would be able to write up a bug report for this showing how to repeat it with exact steps and maybe a Network/HTML trace from the debug console of any modern browser. It would be easy enough to turn that on persistently until the bug happened again. Then submit the relevant portion of the log (with passwords scrubbed if present) to pmdev.

        I have no idea if this is a real bug or just user/user-env error and neither do you. Since it’s never happened to me and I’ve never seen anyone else mention it, I lean toward the latter.

        So where are the technical details, sundialsvc4? What good is it to say "it happened again, I'm special, downvote away" if you're really interested in correcting the problem?
Re: The Boy Scout Rule
by sundialsvc4 (Abbot) on Jan 26, 2015 at 14:12 UTC

    My take on opportunistic refactoring is different from the interpretation I read here.   Rather than: I've got some time on my hands so let's go looking for something to change; I interpret it to mean: I am in this piece of code anyway -- due to a bug to fix or functionality to add or change -- and if I see something else here that can be (demonstrably) improved whilst I'm here, then I make a case for doing so.

    Hear, hear!

    My point-of-view is admittedly altered by being the consultant who is called-in to (re-)evaluate present state and to (re-)plan future state on projects which are presently “on fire,” or, as the case may be, “smoking [ruins].”   One of the things that the client asks is ... “can we simply get back to the stable-state where we used to be, and proceed forward (older but wiser) from there?”   In order to give a meaningful answer to the question, I look at [look for ...] the change-order log and the associated [try somehow to associate it with ...] the git or svn commit and branch history.

    What I find, way-y-y-y too often, is that there really is no correspondence between the two.   “A single commit” does not correspond to “the remedy to that service-order, no more and no less.”   Far too often, the developer found something that smelled bad [to him ...] and “simply fixed it,” and didn’t tell anyone.   Didn’t structure the change so that it could be backed-out.   And didn’t update the validation test-suite (which should have detected any regression), because there wasn’t one.   The (now mostly-departed) team gave only lip-service to testing because it took time away from the secret Ruby re-write making Kewel New Fee-Churs.   In any case, no management was guarding the hen-house.   Management simply trusted the programmers to know what they were doing (having decided that they were un-manageable anyway), and did not realize that those programmers were flying a 747 by the seat of their pants ... were making it up as they went along ... didn’t know what they were doing, either.

    Testing, to me, is simply one of several expressions of discipline.   Way, way too many programmers out there have no discipline at all.   They were taught “how to write source-code,” not how to build robust, maintainable, software machines that must safely carry passengers without a pilot or co-pilot on board.   To their training and experience, “source code” is “the end,” not “the means to a different end.”

    And, if you asked me where the “fabled disconnect” comes, between programmers and management, that would be my reply.   To a classic software developer, everything is software.

      The text you quoted does not appear in the node you replied to.

      Why do you repeatedly reply to the wrong post? Or is this another PerlMonks site "bug"?

      Downvoted!

      Not because I disagree with you; but because of your inane, facile, puerile, snide, underhand and utterly deliberate practice of posting a reply to a particular node; as a response to {some other} randomly chosen node.

      What the .... do you think it achieves? (Paraphrase: Why are you such a deliberate, willful moron?)


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
      In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
        It achieves the goal of making you expend energy against the troll, while they sit back and admire their handiwork :)
        The post is more visible because replies below a given depth are hidden. Being everywhere is the best he can do to sell his work, when the content of those posts should close every door.