http://www.perlmonks.org?node_id=536404


in reply to Versions Thought

The most common version number for a brand new module is 0.0.1.

Most new modules are not release quality but just alpha or beta quality. Releasing a new module with a 1.xx version number is a bad idea, even if it has been extensively tested in-house, because the API can have problems that remain undiscovered until somebody uses it for something the developer never thought of.

In my opinion it is better to use a 0.xx version number that says "try it and tell me what you think", then correct the problems that are reported, and finally release something stable.
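
As a concrete sketch of what that looks like in a module's source (the package name My::Widget is purely a placeholder):

    package My::Widget;     # hypothetical name, for illustration only

    use strict;
    use warnings;

    # A 0.xx version advertises "alpha/beta: the API may still change
    # in response to feedback from real users".
    our $VERSION = '0.0.1';

    1;

Once the interface has survived real use outside the author's hands, bumping the version to 1.00 signals that it is considered stable.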

Re^2: Versions Thought
by BrowserUk (Patriarch) on Mar 14, 2006 at 10:34 UTC
    Releasing a new module with a 1.xx version number is a bad idea, even if it has been extensively tested in-house, because the API can have problems that remain undiscovered until somebody uses it for something the developer never thought of.

    If the module has been "extensively tested in-house", then the interface as-is is worthy of release.

    But that partly reflects my missive that nothing is tested until it has been used. That is, no amount of potted test suites, unit tests or other non-use testing is a substitute for using the code for real. And no interface or code can be described as having been extensively tested until a real(istic) application has been written using it, and (in most cases) by a developer who is not part of the development team.

    Of course this gets fuzzy with one-man, and even one-team, development shops, but the basic principle holds: don't develop libraries or interfaces in the absence of a real use-case and a real application that meets that use-case.

    By that measure, if the interface/library has been used successfully for a use-case, and it meets the requirements of that use-case, then it is worthy of release. Others may have a sufficiently similar use-case that it will meet their requirements too.

    They may also discover requirements and omissions that would allow the module to be (more easily or more completely) adapted to their use-case; if they do, and if those requirements do not impact the original use, then extensions to the interface may be in order. They may also feed back that, for their use-case, the current interface is inadequate.

    The original developers may see benefits, to their current application, to future applications, or just to the general usability and effectiveness of the module, and choose to adapt it and release a new interface; but the original interface may remain perfectly sufficient for the original use-case, which need not be adapted to the new interface until that happy state changes.

    Even after the new interface is released, the original interface may be sufficient and complete for some new applications, and should continue to be maintained and supported until it is evident that it is uneconomic (an ethereal judgement with free software) to do so. One real benefit that falls out of F/OSS software is that if the original developers choose to drop support for an older release, users for whom that old release is all they need can avoid the costs of upgrading and take on the maintenance themselves.

    What it comes down to (IMO) is that code should be developed to meet a (real) requirement, not a hypothetical one. And it should not be released until it has proven useful in meeting that real requirement. Only at that point is the interface "proven", and therefore likely to be useful to others.

    Code written to meet a set of theoretical goals and requirements, no matter how well coded, tested and documented, has yet to pass the fundamental test of solving a real problem.

    Every project I've ever been involved with (at any level) that was written to a designer's view of what might be needed, rather than to a customer's needs, has expended huge amounts of time and money solving the wrong problems and has invariably missed the boat when it comes to solving the real problems.

    And every library I've ever used that was written to meet the possible, maybe, nice-to-have, future requirements of some theoretical problem has invariably been a pig of an interface to work with when you try to use it to solve real problems, now.

    Indeed, I've arrived at a credo that says solve the real problems now, and deal with future problems as and when they arise. A programmer should of course avoid coding arbitrary limits tailored to meet just the current requirements--don't use a fixed-size static array because it's easier to code than a dynamically allocated one. That will always come back to bite you :).
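
    A tiny Perl sketch of that last point (the data and the 100-record cap are invented purely for illustration):

        use strict;
        use warnings;

        my @input = map { "record $_\n" } 1 .. 250;    # stand-in for real input

        # Arbitrary limit baked in because it meets today's requirement:
        # everything past slot 100 is silently lost.
        my @capped = @input[ 0 .. 99 ];

        # The dynamic alternative costs nothing extra in Perl; the array
        # simply grows to fit whatever arrives.
        my @all = @input;

        printf "capped: %d records, kept: %d records\n",
            scalar @capped, scalar @all;

    In languages with genuinely static arrays, the same trade-off shows up as a hard-coded buffer size versus an allocation that tracks the input.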

    But neither should he speculate too much about what might be needed at some unspecified point in the future, and over-complicate the code to accommodate that possibility. By the time that eventuality arises, there will invariably be 20 other good reasons for refactoring the code, and the chances are that a better solution to that now-current need will fall out of the reengineering than you could have achieved back when you merely thought it might be a good idea.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.