Re: Re: Developer Accountability

by Anonymous Monk
on Apr 30, 2003 at 13:23 UTC ( [id://254266] )


in reply to Re: Developer Accountability
in thread Developer Accountability

You are trying to solve the wrong problem.

Fine. We now have a way to rewrite software so that it crashes immediately if a function is called incorrectly. We can't guarantee that the function works as documented, or that the caller understood the documentation and is calling the right function. We can't guarantee that caller or callee is writing to a spec that remotely resembles what the user thought would happen. And we don't have a solution for the problem where you get half of your code written and then find out that you need to change the spec.

The fact is that software development is all about management of complexity. Telling developers to accept a level of complexity across the board that may or may not help them much is more likely to compound the problem than to alleviate it.

Replies are listed 'Best First'.
Re: Re: Re: Developer Accountability
by BrowserUk (Patriarch) on Apr 30, 2003 at 16:34 UTC

    You are trying to solve the wrong problem.

    Obviously, I disagree. As I indicated, my own experience is that achieving maturity in software is the key to reliability, hence the (now old) adage that you should never use an x.0 release of anything for production purposes. And the key to maturity comes from the ability to re-use components. Decoupling components through clearly defined and enforced interface contracts removes many of the barriers to code re-use.

    We now have a way to rewrite software so that it crashes immediately if a function is called incorrectly.

    Put that way it doesn't sound so useful, but it is.

    By "crashing immediately", it becomes painfully obvious that the program is incorrect. It's when coding errors go undetected and make it out the door into production systems that they cause the most harm.

    Failing reliably and early is the main benefit of using strict.
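
    To make that concrete, here is a minimal sketch: under strict, a mistyped variable name is a compile-time error; without it, perl quietly creates a fresh variable and the bug ships.

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $total = 42;

        # Typo for $total. Under strict this is a compile-time error:
        #   Global symbol "$tota1" requires explicit package name
        # Remove both pragmas and the program runs without complaint,
        # printing an empty line, and the bug ships.
        print $tota1, "\n";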

    We can't guarantee that the function works as documented

    That guarantee comes from maturity. The more often and the more widely a function is used, the more likely it is that any errors in its functionality will be detected and corrected. The maturity comes from re-use. I think this is the key to open-source development's success.

    or that the caller understood the documentation and is calling the right function.

    Clearly defined and enforced interfaces result in early failure, forcing the developer to discover his error sooner rather than later.
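
    A hypothetical sketch of that enforcement (the function and its range are invented for illustration): croak() from the Carp module reports the error at the caller's line, pointing the developer straight at the incorrect call rather than at the module's internals.

        use Carp;

        sub set_speed {
            my ($rpm) = @_;

            # enforce the documented interface: 0 <= rpm < 32767
            croak "set_speed: rpm out of range"
                unless defined $rpm and $rpm >= 0 and $rpm < 32767;

            # ... the real work would happen here ...
            return $rpm;
        }

        set_speed(-5);   # dies, and the message names this line, not the sub's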

    We can't guarantee that caller or callee is writing to a spec that remotely resembles what the user thought would happen. And we don't have a solution for the problem where you get half of your code written and then find out that you need to change the spec.

    That's a mixed metaphor (without the metaphor:), and a different problem. This is more a bespoke-software problem than a packaged-software one.

    You can't fix a bad specification at the code level. You either code to the spec and hope the spec is right, or you throw away the spec and the analysis (and the analyst while you're at it) and use RAD to get the user to direct and approve the code step by step.

    That doesn't rule out the practice of code re-use, nor the benefits derived from it. In fact, using RAD in conjunction with a readily available set of components is far and away the cheapest and quickest way of doing bespoke development, enabling the developer to concentrate on picking and gluing the components together rather than designing and coding the infrastructure from scratch each time.

    This is exactly what HLLs are all about, and Perl + CPAN in particular. The only fly in this ointment is the lack of consistency in the quality of the documentation and code... (I'd also add a lack of coherence in overall design at the CPAN level, but that's a personal windmill).

    The fact is that software development is all about management of complexity.

    No argument there. And the key to managing complexity is "divide and conquer", "decomposition", "de-coupling", "objectifying", "componentisation". Call it what you will. The solution to complexity is to break it into smaller, solvable pieces.

    The problem is, the more complex the problem, the more pieces are required. The more pieces, the more interfaces there are. These interfaces then become a part of the problem. Take a box of tried and trusted components and try to slot them together to make a complex entity, and it's the connections that are the key. In the physical world, trying to screw a 3/4" bolt into a 5/8" nut makes getting it wrong pretty hard.

    Every electrical connector these days is designed such that it will only connect one way. Do electricians, electronics engineers and technicians feel slighted by this idiot-proofing? No. They are grateful that another source of potential errors is removed from their daily lives.

    Telling developers to accept a level of complexity across the board that may or may not help them much is more likely to compound the problem than to alleviate it.

    I do not understand this assertion at all. I don't know where you are coming from or how you got there.

    Summary

    What I am advocating is the software equivalent of the one-way-only, keyed connector.

    It doesn't remove the need for proper design, nor reduce the complexity of complex systems, nor free the developer from due care.

    What it does do is enable complexity management through decomposition, and reliability through code re-use, by fool-proofing the connections between components, thus removing one of the perpetual problems of software development by moving it into the hardware.


    Examine what is said, not who speaks.
    1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
    2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
    3) Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke.
      I think that we are talking past each other. Yes, I understand very well the benefits of using mature components, the role of time and testing, the benefits of catching errors relatively early. These things are all good in theory; everything is good in theory, if you can just do it right and consistently.

      The problem is that sometimes the Good Stuff™ comes at a cost. And that cost is that it creates extra work and more verbose code, which gets in the way of development. Sometimes the trade-off is worthwhile, and sometimes not. And often different people disagree on where the trade-off is justified.

      See, for instance, the interminable debates on whether or not it is better to have dynamic typing or static typing. If you don't see why someone would possibly want to forgo the benefit of type-checks, then I suggest arguing with merlyn for a while...

        Sorry if my previous post or this one comes across as trying to teach my grandmother to suck eggs, but in the absence of any way of knowing to whom, and at what level, I am corresponding, I find myself spelling out everything in an attempt to avoid confusion and cross-purposes. Even then, it would seem that I am failing dismally:( My apologies in advance to all for the blatant use of over-emphasis in the next sentence, but I wish to make my feelings on this subject very clear.

        I am in no way advocating strong typing at the language level.

        I don't like it, I don't want it, I won't use it, no how, no way, no sir-ree, thank you, deputy dog:)

        What I am advocating is value checking at the interface level. An example: if you've ever used Win32::API, you'll have seen that there are exactly 5 "types" allowed to designate the types of the parameters passed to the OS calls invoked. If you have any passing familiarity with the OS in question, you'll know that at the C/C++ source level there are a gazillion types defined for these parameters. And if you've ever written any C/C++ source to use these APIs, then you'll know that such code is littered with explicit casting of one type to another simply to transfer the output from one API to the input of the next. This comes from the practice of defining a new, different type to describe bloody nearly every parameter to every API. A practice that I find abhorrent, objectionable and a totally worthless waste of the programmer's fingers. Underlying most of these defined types are the DWORD or UINT types, which are simply 32-bit (unsigned) integers, but which will probably become 64-bit integers when the codebase is moved to 64-bit processors. The existence of these types doesn't restrict the range of values that can be passed, even when that range is defined and could be checked.
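
        For reference, the canonical example from the Win32::API documentation looks something like this (two letters, 'N' for number and 'P' for pointer, describe the entire parameter list, and nothing constrains the values passed):

            use Win32::API;

            # GetTempPath(DWORD nBufferLength, LPTSTR lpBuffer) collapses to 'NP'
            my $GetTempPath = Win32::API->new('kernel32', 'GetTempPath', 'NP', 'N');
            my $buffer      = ' ' x 80;     # pre-allocate the output buffer
            my $len         = $GetTempPath->Call(80, $buffer);
            my $temp_path   = substr($buffer, 0, $len);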

        Unlike Pascal-style runtime range checking, where it was possible to define a type in terms of the range of values it could take (something like type TSmall = 0 .. 32767; though it has been 20 years since I did any Pascal), the C/C++ type mechanism concentrates only on the machine representation of the value. This is distinctly un-useful in most cases and is something that the compiler is more than capable of doing on behalf of the programmer, as is clearly demonstrated by perl's use of polymorphic typing, and the rarity with which it creates any problems.

        The only effort required on the part of the programmer in all of this is that of specifying the ranges of acceptable values for some or all of the input parameters. This would probably be most easily achieved through the use of assertions applied to the formal parameters of functions and methods. Essentially this is the same technique as using Test::More and its brethren. The syntax used would obviously vary depending upon the language being used to write the component, but in perl 5 terms it might look something like

        fail 'Bad parameters' unless @_ == 3 and $_[0] >= 0 && $_[0] < 32767 and exists $valid_cmd{$_[1]} and $_[2]->is_writable;

        Kind of crude, but the fail keyword could perhaps be a synonym for die at the simplest level. I wish I was confident enough to try for a Perl 6 style syntax, but I ain't.

        The idea is that the fail directive/keyword/assertion would cause the compiler to construct a call-frame (stack-, register- or VM-based, as required by the language/processor combination) that would perform the assertion at runtime (NOTE: no mention of types or static checking). It wouldn't care whether the value passed in for the first parameter came as byte/char/word/dword/unsigned/signed/long/float or string; the only thing that would be checked is that the value was in range. For this polymorphism to work, when the compiler/interpreter is doing its thing and encounters a function call, it would look up the prototype for that function and perform whatever context conversions are required to put the parameters supplied into the correct form in the call-frame. In effect, this is just taking the perl concept of contexts a stage further.
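
        You can approximate that mechanism in plain perl 5 today with a higher-order wrapper that bolts the assertion onto the front of an existing function. A sketch only; the names (guard, seek_to_track) are mine, invented for illustration:

            # wrap a function with a runtime precondition on its parameters
            sub guard {
                my ($check, $code) = @_;
                return sub {
                    my ($file, $line) = (caller)[1, 2];
                    die "Bad parameters at $file line $line\n"
                        unless $check->(@_);
                    return $code->(@_);
                };
            }

            sub seek_to_track { print "seeking to track $_[0]\n" }

            my $safe_seek = guard(
                sub { @_ == 1 and $_[0] >= 0 and $_[0] < 32767 },
                \&seek_to_track,
            );

            $safe_seek->(42);    # in range: the real function runs
            $safe_seek->(-1);    # out of range: dies before seek_to_track is entered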

        The final step of the process would be to standardise the mechanism and move it into the hardware, so that when the function is compiled, one or a sequence of instructions is emitted by the compiler at the prologue of the function's object code that performs the assertions each time the function is called. The compiler/interpreter arranges the actual parameters into the call-frame, performing any conversions required by the called function's prototype, and then calls the function in the normal way. All perfectly possible to do in software, at the source code level or at the code generation level, but also possible to do at the hardware level. The difference is that once hardware standards are established, a momentum ensues that means that software not making use of them rapidly becomes deprecated. It is also much easier to verify the hardware once than to verify every piece of software that will ever run on a given piece of hardware. Verifying that compilers and interpreters use the hardware correctly is a much smaller task than verifying every program compiled or interpreted by them.

        It's a blue-sky idea and a lofty goal, but as they say in rugby, aim high:) It's also far from fully thought out, but then I ain't a hardware designer. I do think that I've thought the software side of the notion through fairly well, and I don't believe that it would impose any huge burden upon the programmer. For one thing, in the true perl "no one's holding a shotgun to your head" style, if the fail keyword was not used, then the verification code would not be generated and nothing would be checked. It also requires due care on the part of the programmer: if they simply specify -MAXINT .. +MAXINT for every numeric parameter, and /^[\x00-\xff]*$/ for every string, then nothing is checked either.

        However, if the facility did exist, then as a consumer of components or a user of CPAN, I would be very skeptical of modules that didn't use it. I would want to see a comment detailing very good reasons why it was not being used.

        In fact, I almost always find it easier to place guard conditions at the top of a routine, and then code the rest of it in the knowledge that if the inputs aren't valid, I'll never reach here, than to try and handle possible failure modes on the fly.

        Having just re-read that last sentence, maybe the keyword guard is a more descriptive choice than fail.
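
        Spelled with today's tools (guard written as a plain die; the routine and its ranges are invented for the example), that style looks something like:

            sub render_page {
                my ($width, $height, $fh) = @_;

                # guards: execution only gets past this point with known-good inputs
                die "width out of range\n"     unless $width  > 0 and $width  < 32767;
                die "height out of range\n"    unless $height > 0 and $height < 32767;
                die "not an open filehandle\n" unless defined fileno($fh);

                # from here on, the body can assume valid inputs
                print {$fh} "rendering ${width}x${height}\n";
            }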


