Re: Re: Re: Re: Re: Developer Accountability
by BrowserUk (Pope)
on Apr 30, 2003 at 20:51 UTC
Sorry if my previous post or this one comes across as trying to teach my grandmother to suck eggs, but in the absence of any way of knowing to whom, and at what level, I am corresponding, I find myself spelling out everything in an attempt to avoid confusion and cross-purposes. Even then, it would seem that I am failing dismally:( My apologies in advance to all for the blatant use of over-emphasis in the next sentence, but I wish to make my feelings on this subject very clear.
I am in no way advocating strong typing at the language level.
I don't like it, I don't want it, I won't use it, no how, no way, no sir-ree, thank you, deputy dog:)
What I am advocating is value checking at the interface level. An example: if you've ever used Win32::API, you'll have seen that there are exactly 5 "types" allowed to designate the types of the parameters passed to the OS calls invoked. If you have any passing familiarity with the OS in question, you'll know that at the C/C++ source level there are a gazillion types defined for these parameters. And if you've ever written any C/C++ source to use these APIs, then you'll know that such code is littered with explicit casts from one type to another, simply to transfer the output of one API to the input of the next. This comes from the practice of defining a new, different type to describe bloody nearly every parameter of every API. A practice that I find abhorrent, objectionable and a totally worthless waste of the programmer's fingers. Underlying most of these defined types are the DWORD or UINT types, which are simply 32-bit (unsigned) integers, but which will probably become 64-bit integers when the codebase is moved to 64-bit processors. The existence of these types doesn't restrict the range of values that can be passed, even when that range is defined and could be checked.
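To make the contrast concrete, here is roughly how a Win32 call is declared through Win32::API. I'm writing this from memory of the module, so treat the details as a sketch: in C, FindWindow takes two LPCTSTR arguments and returns an HWND, yet here those typedef'd types collapse to just 'P' (pointer) and 'N' (number).

```perl
use Win32::API;

# C prototype: HWND FindWindow(LPCTSTR lpClassName, LPCTSTR lpWindowName);
# The distinct typedefs vanish: both arguments are 'P', the return is 'N'.
my $FindWindow = Win32::API->new( 'user32', 'FindWindow', [ 'P', 'P' ], 'N' );
my $hwnd = $FindWindow->Call( 'Notepad', 0 );
```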
This is unlike Pascal-style runtime range checking, where it was possible to define a type in terms of the range of values it could take (I'd cite some examples, but I can't remember the syntax; it has been 20 years since I did any Pascal). The C/C++ type mechanism concentrates only on the machine representation of the value. This is distinctly un-useful in most cases, and is something that the compiler is more than capable of handling on behalf of the programmer, as is clearly demonstrated by perl's use of polymorphic typing and the rarity with which it causes any problems.
The only effort required of the programmer in all of this is that of specifying the ranges of acceptable values for some or all of the input parameters. This would probably be most easily achieved through the use of assertions applied to the formal parameters of functions and methods. Essentially this is the same technique as using Test::More and its brethren. The syntax used would obviously vary depending upon the language being used to write the component, but in perl 5 terms it might look something like
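A sketch of the idea in hypothetical Perl 5 syntax: `fail` is not a real Perl keyword, and the routine and ranges below are invented purely for illustration.

```perl
# Hypothetical syntax: 'fail' would direct the compiler to emit
# runtime value checks in the call-frame setup for this sub.
sub set_pixel {
    my ($x, $y, $colour) = @_;
    fail unless 0 <= $x && $x <= 1023;           # a value check, not a type check
    fail unless 0 <= $y && $y <= 767;
    fail unless $colour =~ /^#[0-9A-Fa-f]{6}$/;  # strings checked by pattern
    ...
}
```

Whatever representation the caller supplies (integer, float or numeric string), only the value is tested against the declared range.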
The idea is that the fail directive/keyword/assertion would cause the compiler to construct a call-frame (stack-, register- or VM-based, as required by the language/processor combination) that would perform the assertion at runtime (NOTE: no mention of types or static checking). It wouldn't care whether the value passed in for the first parameter came as byte/char/word/dword/unsigned/signed/long/float or string; the only thing that would be checked is that the value was in range. For this polymorphism to work, when the compiler/interpreter is doing its thing and encounters a function call, it would look up the prototype for that function and perform whatever context conversions are required to put the parameters supplied into the correct form in the call-frame. In effect, this is just taking the perl concept of contexts a stage further.
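In the meantime, something close can be cobbled together in ordinary Perl 5 with a wrapping closure that first applies perl's normal context conversion and then asserts the range. This is only a sketch of mine; `wrap_checked` and the volume example are invented for illustration, not taken from any existing module.

```perl
# Wrap a sub so each call coerces its arguments to numbers (perl's
# polymorphic conversion) and then asserts each is within its range.
sub wrap_checked {
    my ($code, @ranges) = @_;
    return sub {
        my @args = @_;
        for my $i (0 .. $#ranges) {
            my ($min, $max) = @{ $ranges[$i] };
            my $val = $args[$i] + 0;   # context conversion: "55" becomes 55
            die "argument $i out of range [$min,$max]\n"
                if $val < $min || $val > $max;
        }
        $code->(@args);
    };
}

my $set_volume = wrap_checked( sub { print "volume: $_[0]\n" }, [ 0, 100 ] );
$set_volume->(55);      # passes: in range
$set_volume->("55");    # also passes: the value, not the type, is checked
```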
The final step of the process would be to standardise the mechanism and move it into the hardware, so that when the function is compiled, one or a sequence of instructions is emitted by the compiler at the prologue of the function's object code that performs the assertions each time the function is called. The compiler/interpreter arranges the actual parameters into the call-frame, performing any conversions required by the called function's prototype, and then calls the function in the normal way. All perfectly possible to do in software, at the source code level or at the code generation level, but also possible to do at the hardware level. The difference is that once hardware standards are established, a momentum ensues which means that software not making use of them rapidly becomes deprecated. It is also much easier to verify the hardware once than to verify every piece of software that will ever run on a given piece of hardware. Verifying that compilers and interpreters use the hardware correctly is a much smaller task than verifying every program compiled or interpreted by them.
It's a blue-sky idea and a lofty goal, but as they say in rugby, aim high:) It's also far from fully thought out, but then I ain't a hardware designer. I do think that I've thought the software side of the notion through fairly well, and I don't believe that it would impose any huge burden upon the programmer. For one thing, in the true perl "no one's holding a shotgun to your head" style, if the fail keyword was not used, then the verification code would not be generated and nothing would be checked. It also requires due care on the part of the programmer: if they simply specify -MAX_INT .. +MAX_INT for every numeric parameter, and /[\x00-\xff]*/ for every string, then nothing is really checked either.
However, if the facility did exist, then as a consumer of components or a user of CPAN, I would be very skeptical of modules that didn't use it. I would want to see a comment detailing very good reasons why it was not being used.
In fact, I almost always find it easier to place guard conditions at the top of a routine, and then code the rest of it in the knowledge that if the inputs aren't valid, I'll never reach this point, than to try to handle possible failure modes on the fly.
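That guard-condition style, in plain everyday Perl 5 (an illustrative routine of my own, not from any particular module):

```perl
# All validation happens at the top; the body can assume sane inputs.
sub resize_image {
    my ($width, $height) = @_;
    die "width must be an integer in 1..4096\n"
        unless defined $width  && $width  =~ /^\d+$/ && $width  >= 1 && $width  <= 4096;
    die "height must be an integer in 1..4096\n"
        unless defined $height && $height =~ /^\d+$/ && $height >= 1 && $height <= 4096;

    # If execution reaches here, the inputs are known to be valid.
    return ( $width, $height );
}
```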
Examine what is said, not who speaks.
1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3) Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke.