PerlMonks  

Re^5: Informal Poll: Traits (Long Post Warning!)

by BrowserUk (Patriarch)
on Nov 19, 2005 at 23:51 UTC ( [id://510196] )



in reply to Re^4: Informal Poll: why aren't you using traits?
in thread Informal Poll: why aren't you using traits?

Okay. First a few general clarifications about my original reply.

  • Ovid's phrasing of his question permitted my response, which was meant to be summarised by:

    Whilst I appreciate the need for some trait-like mechanism, there are currently many competing visions for how that mechanism should operate at the semantic level; and few, if any, practical implementations of any of them.

    Until the field of proposals thins out, through consolidation or lack of interest, and it is possible to compare practical implementations of those that remain side-by-side, and evaluate them from both the practicality of their semantics in use, and the impact they have on performance, I am not ready to expend effort trying to decide which of the visions is the 'Right One'.

  • My rather less than brief foray into describing the limitations of SI, and particularly the problems with MI:
    • Was not meant to be authoritative.
    • Was a generalisation of the problems as I have seen, read and experienced them.

      That inevitably limits it to some mix of what I know, or rather what I think I know, about some aspects of MI languages I have used.

      So that limits it to

      • C++: which I last used roughly 10 years ago. Things will have moved on. I hated it anyway.
      • Smalltalk: Last used in anger even further back in the form of SmallTalk/V PM by Digitalk Inc. It ran like a dog back then.

        I've more recently played with Smalltalk Express from Objectshare, which I think is pretty much the same as Smalltalk V above in terms of the technology employed internally. Of course, it runs much more quickly now, on a 2.4 GHz P4 machine, than it did back then on a 20MHz 386, or the 40 MHz P1 that were the current top of the range when I was using it professionally.

        I've got a copy of Visual Age Smalltalk, which I have heard good things about. It is a trial edition on a CD I picked up at a trade show somewhere. Maybe I should dig it out and take a look one day soon.

      • Eiffel: Actually a slightly cut down version of the original developed for OU courses called Eiffel-s.

        I did little with this as I was only acting as a teaching assistant to someone taking an OU course that called for it. I did acquire my appreciation of Design by Contract from it though.

      • Oh, and Perl I guess, though I have never had call to write extensively MI code in Perl.

      I don't recall ever having heard of Dylan; I didn't know what the acronym CLOS stood for, though I have seen it come up by reference a few times; despite 3 serious attempts I've never got to grips with LISP.

      LISP is one of two languages that I think I ought to "know"; that I ought to "have used in earnest". Unfortunately, I can never get past the syntax--it is just too tedious for words. The other is Haskell, which, despite my best efforts, I have a similar problem with. I won't justify that any further, nor respond to criticism of it. Some languages just gel for me and others do not.

      I have used Forth more extensively, a long time ago, and I believe that, back then anyway, both languages (Forth and Lisp) featured a similar threaded-interpreted-bytecode as the basis of their implementation. Again, things have moved on.

    • Was meant to serve to indicate that I had (at least some) understanding of the premise underlying Traits and trait-like mechanisms and reinforce my acceptance of the need for them.

    Which brings me to my last general point.

  • What you know, or think you know, is only as good as your last involvement/experience/investigation of the subject.

    I said more on that in How do I know what I 'know' is right? and I won't repeat it, but, as a general form of disclaimer, unless every post here is going to be treated as a form of scientific paper, thoroughly researched over weeks and bringing in every available scrap of bleeding edge research, then it is inevitable that sometimes the information within the posts here will reflect "old knowledge".

    As I have also said before, my reason for frequenting this place is to learn! If along the way I impart some useful knowledge to others that's a nice bonus, but my reasons are purely self-interest: I want to learn.

    And one of the best ways I know of learning is to converse with others who are more knowledgeable than oneself. And that means exposing one's opinions, thoughts, assumptions and dogmas to the cold light of scrutiny by those more knowledgeable people.

    Through a quirk of fate I arrived at the Monastery Gates and discovered a place with a rare mix of intelligent, thoughtful and experienced people with an above average tolerance for those less knowledgeable seeking enlightenment. Way above average. I stuck around.

    The greatest pleasure in life, beyond the satisfaction of carnal desires, is the feeling that I have learnt something. So when someone with greater knowledge than myself comes along and educates me, I am "over the moon Brian!". Thanking you.

So now to address (some) of the points you raise in your responses. Anything I don't touch on, means I accept your correction, or greater knowledge.

  • I think that we have an impedance mismatch on our interpretation of the phrase "dynamic language".

    I chose the phrase "dynamic languages" as a way of avoiding the debate about what constitutes a compiled versus interpreted language.

    At various points in your posts you state or imply that your interpretation of a "dynamic language" includes, amongst others, C++, Standard ML and Haskell; that these have (some degree of) dynamism to them. This is where we differ.

    As far as I am aware, none of these languages can

    • create classes at runtime (ART);

      Perl5 can require or unfreeze or eval classes into existence.

      Perl6 will, (if my memory holds good), create a new class if an instance of an existing class is modified by the runtime addition of a method, or application of a mixin. The existing class remains as was, but the modified instance becomes an instance of a new, duplicated-and-modified class.

    • create methods (ART);

      As above, both Perl 5 and Perl 6, and also (I believe) Python and Ruby can do this through direct or indirect means.

    • reassign instances (ART);

      In Perl 5 terms, re-bless.

    • There are other (ART) modifications to the type system and inheritance tree that can come about.

    In essence, the defining characteristic of what I mean when I refer to "dynamic languages" is the ability to eval code into existence.
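    To make that concrete, here is a minimal Perl 5 sketch of all three capabilities; the package names are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util 'blessed';

# 1. Create a class at runtime by eval-ing it into existence.
eval <<'END_CLASS' or die $@;
    package Runtime::Widget;
    sub new  { my $class = shift; bless { @_ }, $class }
    sub name { $_[0]{name} }
    1;
END_CLASS

my $obj = Runtime::Widget->new( name => 'dynamic' );
print $obj->name, "\n";            # prints "dynamic"

# 2. Create a method at runtime by installing a code ref
#    into the package's symbol table.
{
    no strict 'refs';
    *{'Runtime::Widget::shout'} = sub { uc $_[0]->name };
}
print $obj->shout, "\n";           # prints "DYNAMIC"

# 3. Reassign the instance to another class at runtime.
eval <<'END_CLASS' or die $@;
    package Runtime::Gadget;
    our @ISA = ('Runtime::Widget');
    sub name { 'gadget' }
    1;
END_CLASS

bless $obj, 'Runtime::Gadget';     # the Perl 5 spelling of "reassign instance"
print blessed( $obj ), "\n";       # prints "Runtime::Gadget"
print $obj->shout, "\n";           # prints "GADGET" (shout is inherited)
```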

    Unless things have moved on markedly from where they were when I was last current with the proposed state of Parrot, this gets even more complicated, with any one language being able to act on and modify the state of the inheritance trees of all the other supported languages.

    It is, or at least was, proposed that Perl could use Python library code and vice versa. And even that each could eval code of the other language into existence at runtime.

    As far as I am aware, none of what I would term true compiled languages, for which I will cite C++, Haskell and O'Caml, can achieve this. Their dynamic aspects are constrained to compile-time creation of classes, with the possibility of classes being generated via meta-programming through the use of templates (and now traits) in C++; Template Haskell in Haskell; and Meta-O'Caml in O'Caml. I'm not sufficiently up on Standard ML, CLOS, LISP and others to comment.

    The only runtime dynamism that these languages have, (again, as far as I am aware), comes in the form of runtime decision branching on the basis of introspection.

    if classof( object ) do_this(); else do_that(); end;
    if classof( object ).has_method( X ) ....

    though the syntax will vary, sometimes extremely.
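    In Perl 5, that introspective branching might be spelt as follows (the class and method names are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Legacy;
sub new     { bless {}, shift }
sub old_api { 'old' }

package Modern;
sub new     { bless {}, shift }
sub new_api { 'new' }

package main;

# "classof( object )" in Perl 5 is ref();
# "classof( object ).has_method( X )" is can().
sub dispatch {
    my $obj = shift;
    return $obj->can( 'new_api' ) ? $obj->new_api : $obj->old_api;
}

my $m = Modern->new;
print ref( $m ), "\n";                 # prints "Modern"
print dispatch( $m ),           "\n";  # prints "new"
print dispatch( Legacy->new ),  "\n";  # prints "old"
```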

    The basic constraint is that the introspection is limited to essentially read-only state, and both branches of any decision that can be made as a result of introspection have to be in place at compile time for type checking, type inferencing, etc.

    These are what I would call "static languages", as all possible branch points, and the code they invoke, are known--"statically" defined--by the time compilation is complete.

    So, for the purposes of discussion, my definition of "dynamic languages" would include Perl, Python, Ruby, Lua, Tcl, and Smalltalk as I know it.

    It would not include C, C++, ML, O'Caml, Haskell.

    I cannot comment upon LISP, CLOS or DYLAN as I have no idea whether these would constitute static (true compiled) or dynamic (compiled to byte code and interpreted) languages according to my definition.

    If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.

  • With respect to my use of the term 'vtable'.

    I was using this as a generic term that most people might be familiar with, for the process that requires a language to go through an (at least) two-stage process of lookup when a method invocation is encountered within a piece of code.

    1. Find the class of the object to which this method is being applied.
    2. Find the address of the code that implements this method within that class.

    This is pretty much true for all OO languages, whether it is termed method invocation, message passing or anything else. However, where it differs between two distinct classes of language is the timing of that lookup.

    I chose to use the term vtable for this, but you can call it symbol table lookup, or hash lookup or in the case of the Spineless-tagless G-machine, I think it is termed the "info-table" lookup (but don't quote me on it :).

    The point is that there has to be a lookup done somewhere, and I chose to term that process using the term I am, (and perhaps many other people are), most familiar with.
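    In Perl 5 terms the two stages are directly visible: blessed() answers "which class?", and can() answers "which code?". The class below is invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util 'blessed';

package Greeter;
sub new   { bless {}, shift }
sub hello { 'hello' }

package main;

my $obj = Greeter->new;

# Stage 1: find the class of the object to which the method is applied.
my $class = blessed( $obj );           # 'Greeter'

# Stage 2: find the address of the code that implements the method
# within that class (a code ref, located via can()).
my $code = $class->can( 'hello' );

print $code->( $obj ), "\n";           # prints "hello"
```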

    So, what is really important about method resolution is when it is done, and how long it takes.

    • In languages like C++, Haskell, O'Caml--those I term static-compiled, or fully pre-compiled or non-dynamic--this lookup resolution is done entirely at compile time.

      And compilation is done once. And done before the program is ever loaded or run.

      In these languages, it matters not, (a jot), how long compilation takes, a good thing in Haskell's case for some complicated programs (like the Haskell compiler itself!). The user never sees, nor has to wait for, that compilation.

      Some languages, like Haskell and O'Caml, make full use of this off-line time to extraordinary effect in producing immensely fast executables and provide extremely useful language features like lazy evaluation, lazy lists, etc. They do this (without expressing any expertise on these compilers), by analysing the begeebers out of the code to the point where, by the time you get to run the code, it has, (almost), been reduced to a series of lookup tables and a few branch points.

      That's a way over simplification of the process, but the point is, by the time methods are actually invoked, they have been reduced to a single level lookup that only requires the code to actually be run the first time it is encountered and from that point on the value is just substituted for the call. All "class lookups", "method resolutions" and related processing is completed before the program is ever loaded.

      Subroutine (method) caching (optimisation) is not only built into the compilation cycle, it is pretty nearly, if not completely, ubiquitous.

    • Contrast the above with the situation in Perl et al. My "dynamic languages".

      In the absence of the ability to load pre-compiled byte code, not only does whatever time is spent performing class and method resolution impact the user every time they run the code, the fact that it does imposes limitations on how much time can be spent optimising the results.

      Even with the ability to load pre-compiled byte code, the imperative to allow that byte code to run anywhere, means that final conversion to machine code (JIT) must be delayed until load time. And, unless you are going to give up on modularisation, relocation fix ups also have to be done at this time.

      And then, you can only fully reduce the lookups within any given class heterarchy to a single level if the language accepts and enshrines the notion that all classes are closed at runtime! To quote from the Dylan document:

      ... a monotonic linearization enables some compile-time method selection that would otherwise be impossible in the absence of a closed-world assumption.

      Without that "closed-world assumption", you have to have at least one decision point at each method lookup:

      if( classIsClosed ) {
          Look up the address of the code in this class,
          fix up the parameters and invoke it.
      }
      else {
          Perform a full search of the inheritance heterarchy to locate
          the (appropriate) method provider (possibly with conflict arbitration).
          Fix up the parameters (with possible further class and method
          resolution cycles).
          Invoke the code.
      }

      Not only does the non-closed branch take longer, it also requires that all the support structures and mechanisms be in place, regardless of whether it is ever used in any given program.

      That means at least a flag per class to indicate closedness. It also means that one must retain a lookup table of some kind that maps class names to other lookup tables that map method names to methods, and that retains the superclass hierarchy and search pattern from this point forward.

      In a fully pre-compiled language, much of this information can be discarded at compile time.

    So the question comes, is the "closed-world assumption" true for the languages we care about?

    In the case of Perl 5, I would say that it is most definitely not.

    Perl 6? I seem to recall that there was mention of the ability to mark a class as closed. Which as far as I can see, means that non-closed classes are possible. And with that conclusion comes the requirement for all the support structures and code to provide for that possibility.
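    The open-world behaviour is easy to demonstrate in Perl 5: the inheritance graph can change between two calls of the same method, so the lookup cannot be frozen at compile time (package names invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Base;  sub greet { 'base' }
package Mixin; sub greet { 'mixin' }

package Thing;
our @ISA = ('Base');
sub new { bless {}, shift }

package main;

my $t = Thing->new;
print $t->greet, "\n";   # prints "base"

# Splice a new parent in at runtime; Perl invalidates its method
# cache and the very next call resolves differently.
unshift @Thing::ISA, 'Mixin';
print $t->greet, "\n";   # prints "mixin"
```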

  • Which brings me to my discussion regarding performance; my second post in response to Aristotle; and many of the points you raise in your response to it.

    I took Aristotle's response, (I think correctly), to mean that if Traits, (or one of the other mechanisms), solves the problem with MI, then surely performance is but a secondary consideration.

    You ask, in your second post:

    only if you are writing performance critical things ... if you are ... then why the f*** are you writing it in Perl ;)

    I like Perl. I like the ethos, permissiveness, conciseness and productivity it gives me. Given the choice, I would use Perl in preference to any other language with which I am familiar. That's quite a long list, though it does have its holes.

    Love its freedoms; know its limitations.

    One of the limitations of Perl (and other "dynamic languages" according to my definition), is performance.

    Besides the empirical evidence, there is more practical evidence of the performance limitations of Perl, from which I will draw one quote:

    The primary advantages of mod_perl are power and speed.

    The very existence of these solutions, is a strong indicator of a problem.

    So, with respect, the need for speed goes way beyond the bleeding edge of "video games" development, or the esoterics of "Nuclear Missile Guidance systems".

    A regular question that arises here at PM is "How do I keep the browser user informed, whilst I generate X in the background?".

    Wouldn't it be nice if you could generate your charts or summarise your data, or search your in-memory DB quickly enough that you didn't need to keep the user apprised of the delay?

    Yes, using Perl et al. is a conscious decision that we make, trading raw performance for programmer productivity.

    Yes, we can throw hardware at the problem to circumvent the need to move to another language for data hungry and/or CPU hungry processes.

    Yes, we can drop into XS, or Inline C, or PDL or Math::Pari to mitigate localised performance hits.

    But wouldn't it be nice to avoid all of these expediencies?

    I have never known the situation in 25 years--except for the occasional old video game being run on new processors where the timing loop ran so fast that they became unplayable--where a user has complained that their program "runs too fast". Everybody likes it when the programs they use run quickly.

    Not at the expense of correctness; or usability; or "good design" or maintainability--though the significance of those last two depends very much who you are.

    Most users do not care a fig for how hard it is to maintain the software they use; they are only interested that it does what it is meant to do, correctly, with as little effort on their behalf as possible, and as quickly as possible. Their time is money, just as the programmer's is.

    Programmers may be unique in the effect that their decisions can have upon the daily lives of millions of people.

    "This would run faster if I accessed the instance data directly, but it will be a whole lot easier for me, or one of my fellow programmers, to modify, should that need ever arise in the future, if I indirect the accesses through setters and getters."

    And a few hundred thousand people around the world every day, wait a second or two longer every time they use the application or web site that class is a part of.

    Programmers aren't the only ones who allow self-serving decisions to affect their customers; but they are one of the few groups whose decisions can quickly affect large numbers of people; and one of the very few groups that do it "just in case".
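    The cost of that particular indirection is easy to measure with the core Benchmark module. This is only a sketch, and the exact ratio will vary by machine and perl version:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark 'cmpthese';

package Point;
sub new { bless { x => 42 }, shift }
sub x   { $_[0]{x} }   # getter: one extra sub call per access

package main;

my $p = Point->new;

# Compare reaching into the hash directly against going
# through the accessor method.
cmpthese( -1, {
    direct   => sub { my $v = $p->{x} },
    accessor => sub { my $v = $p->x   },
} );
```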

    If the compiler and caching technology has reached sufficient state of advance that all the features mooted for Perl 6 can be accommodated and still allow for a sufficient level of performance when those features are not employed, then I am all for them.

    But most of what I have read regarding the development of implementations of trait-like mechanisms, is being done with languages that are fully pre-compiled producing static executables with (mostly) static class heterarchies, and read-only introspection capabilities.

    Parrot is meant to be going to produce distributable, pre-compiled binaries that will require only load-time relocation fix-ups and maybe that will be the saving grace.

    Still, all the features mooted for inclusion in Perl 6, (assuming they are not deferred), that will impose not just runtime hits if they are used, but also the necessity to architect the entire language implementation to accommodate their use--think of Perl 5 and threads, and the performance hit that compiling with MULTIPLICITY and USE_ITHREADS in 5.8.x has compared to without, or to 5.6.2!--worry me.

  • Search patterns, breadth first/ depth first etc.

    This is almost an aside, but you did bring this up in both posts, and in the first one attributed the "proposal" to me.

    This was in no way anything I was proposing.

    I was alluding to proposals that I vaguely recalled from watching the Perl6/Parrot lists go by, where there were keywords being mooted to provide for the programmer to specify the inheritance tree search ordering. I vaguely remember trying to look up some term that came up within these discussions--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.

    I just spent an inordinate amount of time trying to relocate those list discussions and failed miserably (though I saw your name come up a lot in later, similar threads!).

    I had just about given up when a search turned up this page of Apocalypse 12. And there, right under the first paragraph heading near the top of the page are the following keywords:

     :canonical           # canonical dispatch order
     :ascendant           # most-derived first, like destruction order
     :descendant          # least-derived first, like construction order
     :preorder            # like Perl 5 dispatch
     :breadth             # like multimethod dispatch

     and some that specify selection criteria:

     :super               # only immediate parent classes
     :method(Str)         # only classes containing method declaration
     :omit(Selector)      # only classes that don't match selector
     :include(Selector)   # only classes that match selector

    Now maybe I was (am) mixed up about what use was proposed for these keywords, or maybe that proposal has been changed or dropped, but I was not imagining that there was something relating to this, and I was definitely not proposing it.

The upshot is that I want Perl 6 to succeed--for purely selfish reasons.

I want to be able to program everything in Perl. Well, maybe not Nuclear Missiles or video games, but as far as possible everything else. I don't want to have to resort to its equivalent of XS or Inline C. I don't want to have to make use of libraries like GD and PDL and Math::Pari to achieve a reasonable performance for CPU intensive work. I know that C or pretty much any fully pre-compiled language will be faster than Perl for these tasks, but so what? With the expenditure of sufficient effort, I could do everything I now do in Perl, in Assembler. And it would be faster. That is not the point. What I want is for most everything I routinely (and even occasionally) do by using other languages, even in part, to achieve reasonable performance, directly in Perl.

That's a lot of wants, and a high goal, but I believe that Perl-like VHLL, dynamic, semi-compiled languages are the most productive, and I want to benefit from that productivity for as much of what I do as I can. What's more, I believe, (I'm beginning to sound like a Baptist preacher :), that much if not all of what I would like is achievable. I just fear that if too many nice-to-have features are added into the (core of the) language, the need to support them will have an overly detrimental effect on what can be achieved.

In the light of TimToady's post in this thread, it looks as if, through your influence or otherwise, my fears are unfounded. He does have a habit of making the right calls in these matters, so I will shut up and wait and see.

Relating this all back to the beginning and Ovid's post: I understand the need for Traits or one of the near-aliases of that term, but I fear that, without seeing a live implementation in a Perlish language, with all of the dynamism that entails, its provision within the core of Perl 6 will inevitably be another foot on the brake of its potential performance.

In Perl 5 terms, I think that any code using an implementation would have to be very unconcerned about performance to warrant its use.

It may be that you have hit upon the mechanism for performing this entirely at compile time so that no runtime penalty ensues from it, but on the basis of what I have read, including those of the links from Class::Trait that worked, and the Dylan reference in particular, the requirement for a closed-world assumption seems to me to be in conflict with both Perl 5 and Perl 6. Maybe that can be mitigated without penalty in all but those cases where the assumption does not hold true, but as they say, seeing is believing.

History shows that the first few cuts of any new mechanism/algorithm can always be improved upon performance-wise. Whether sorting, or FFT, or hidden line removal, or ray tracing, or prime validation, the algorithms just seem to get faster and faster with each new cut.

Performance is not the only criterion, nor even the first criterion, but it is a criterion against which a language can be, and will be, measured. And when adding features to the core of a language that potentially affect all programs that will be written in that language, whether they use the feature or not, you had best be sure that you pick the right semantics and the best algorithm.

And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^6: Informal Poll: Traits (Long Post Warning!)
by jeffguy (Sexton) on Nov 20, 2005 at 01:33 UTC
    BrowserUK,
    Off topic, I recommend (if you ever find time with all your studies) pursuing LISP, at least to the point where you see its power (the MANY uses of macros). I think you'd like it a lot. It is so completely customizable, so you can make it whatever you want. It seems very perlish, but more so. Except that in Common Lisp they often opted for long function names instead of short ones. But I've found that my ability to employ once-and-only-once to the extreme and to introduce new abstractions everywhere to simplify and shrink my code -- I've found all that makes up for the annoyingly long names. And it's fast.

    Anyway, here's a good (free) book that sprints through the language so it can get to all the coolnesses of macros. Some of the concepts are very different from what I'm used to, so it took FOREVER to get through some of those middle chapters. But the material I learned from the effort was worth it.
    http://paulgraham.com/onlisp.html

    Thanks for all the enlightening posts. They've been a joy to read (if a bit long ;-)
Re^6: Informal Poll: Traits (Long Post Warning!)
by stvn (Monsignor) on Nov 20, 2005 at 11:05 UTC
    BrowserUK

    Wow,.. nice post :)

    I think that we have an impedance mismatch on our interpretation of the phrase "dynamic language".

    Yes, I agree, although not really that different. My criterion for a "dynamic" language is more the ability to write agile code which can cope with dynamic requirements. This could include eval-ing code at runtime, but it also includes other language features such as polymorphism. For instance, I am currently reading about the Standard ML module system. SML is very often a rigorously statically compiled language, but its module system is built in such a way that it almost feels like a more dynamic language. This is because of Functors, which (if I understand them correctly) are essentially parametric modules whose parameters are specified as module "signatures". When you call a functor, you then pass a "structure" that conforms to that "signature" and the functor then creates a new "structure" based on that (it is not all that different from C++ STL stuff actually). If you then combine the module system with ML's polymorphism, you can get a very high degree of dynamic behavior, while still being statically compiled.

    My point is that static code analysis does not have to limit the dynamism in a language.

    I also want to quickly say that runtime introspection (read-only, or read-write) is not (IMO) a criterion for dynamic languages. In fact, in some languages, like SML, I think runtime introspection is just not needed. However, that said, I personally like runtime introspection in my OO :)

    If LISP is so dynamic, and yet also so fast, I would like to understand how it achieves that.

    I won't claim to be an expert on LISP compilation, because I truly have no idea about this. I do know that the only language still in use today as old as LISP is FORTRAN. Both of these languages have blazingly fast compilers available, probably for the simple reason that 40+ years of improvement has gone into them.

    As for how LISP is so dynamic, I think LISP macros have a lot to do with that. LISP has virtually no syntax (aside from all the parens), so when you write LISP code, you are essentially writing an AST (abstract syntax tree). LISP macros are basically functions, executed at compile time, which take a partial AST as a parameter, and return another AST as a result. This goes far beyond the power of text-substitution based macros. And of course, once all these macros are expanded at compile time, there are no runtime penalties.

    To be totally honest, I have written very little LISP/Scheme in my life. Most of my knowledge comes from "reading" it, rather than "writing" it. But with languages like LISP, I think more of the (real-world applicable) value actually comes from the "grokking" of the language, and not the "using" of it. In other words, it is much easier to find work writing Perl than it is writing LISP, but knowing LISP can make me a better Perl programmer.

    With respect to my use of the term 'vtable'.

    <snip a bunch of things related to static method lookup vs. dynamic method lookup>

    Much of what you say is true, but I think it has more to do with the design and implementation of the languages, and less to do with the underlying concepts.

    I believe that static analysis can go a long way, and caching and memoization can take it even further, and whatever's left is probably so minimal I don't need to worry about it. The best results can be achieved by combining all the best practices into one.

    Will this work? I have no idea, but it's fun to try :)

    re: program efficiency vs. programmer efficiency

    I work for a consultancy which writes intranet applications for other businesses (we are basically sub-contractors). While performance is important (we usually have guidelines we must fall within, and we load test to make sure), these applications are long-lived (between 2 and 7 years). It is critical to the success of our business, and in turn to the success of our clients' business, that these applications are maintainable and extendable. Our end-users may not be anything more than peripherally aware of this, and therefore seem not to care about it. However, those same end-users like hearing "yes" to their enhancement requests too. So while those end-users may not associate this with my use of OO, or trade-offs I made for readability, or time I spent writing up unit tests, they certainly would "feel" it if I didn't do that.

    My point is that, for some applications, and for some businesses, application performance is much lower on the list than things like correctness, flexibility and extendability.

    Search patterns, breadth first/ depth first etc.

    Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.

    I vaguely remember trying to look up some term that came up within these discussion--something like the "New York Method" or "City Block Method"?--and failing to find an explanation.

    The name you are looking for is "Manhattan Distance". I am not that familiar with the algorithm myself; however, I surely (unknowingly) employed it many a time since I lived in NYC for a while :) Google can surely provide a better explanation.
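    For what it's worth, the Manhattan (city-block) distance is just the sum of the absolute per-axis differences, as walked on a street grid. A short Perl sketch (the sub name is my own):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util 'sum';

# Manhattan (city-block) distance between two points of equal
# dimension, given as array refs of coordinates.
sub manhattan {
    my ( $p, $q ) = @_;
    return sum map { abs( $p->[$_] - $q->[$_] ) } 0 .. $#$p;
}

print manhattan( [ 0, 0 ], [ 3, 4 ] ), "\n";   # prints "7"
```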

    And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.

    I am not 100% sure of this either; I like the sound of Traits/Roles, but you never know when something better might come along.

    -stvn
      Yeah, I read that part in A12 as well, I think it is flat out insanity myself :) Nuff said.
      I'd just like to point out that the passage you're quoting has almost nothing to do with standard dispatch. It's just syntactic relief for alternate dispatchers, selected by the caller. That last bit is important. Once you commit to a particular dispatcher, you're stuck with it. If you use the ordinary dispatcher syntax, you get the ordinary dispatcher. There's no extra overhead there. In fact, you probably use the ordinary fast dispatcher to get to the alternate dispatcher, as if it were just an ordinary method call. The rest is just syntax.
        That last bit is important. Once you commit to a particular dispatcher, you're stuck with it.

        So if I understand correctly, if I choose for a particular method call to use breadth-first instead of the canonical C3, then it will apply to that particular method call only.

        That is still insane, but not as bad as I originally thought :)

        -stvn

      I do know that the only language still in use today as old as LISP is FORTRAN.

      Just a data point: my brother recently started a new job where he is learning COBOL for the first time.

      He's a mainframe programmer of 25-30 years' standing who spent most of his life in the airline-bookings industry, but the jobs there have dwindled and mostly moved to the US; his new job is in the banking sector.

      COBOL is probably the highest-level language he's had the opportunity to use in anger.

      Hugo

Re^6: Informal Poll: Traits (Long Post Warning!)
by tilly (Archbishop) on Nov 20, 2005 at 13:15 UTC
    Just filling in a couple of details.

    First of all, you'll be glad to know that LISP is fully dynamic by your meaning of the phrase. No, I don't know what performance tricks it uses. Secondly, I can confirm that Ruby is dynamic in all the particulars you discuss, except that you can't change the class of an object at runtime. I should note that Ruby also does not allow multiple inheritance.

      Can you (or anyone) think of any case where reblessing is useful, in the sense that solving the same problem another way would be awkward?

      Makeshifts last the longest.

        When you have a generic proxy object like Object::Realize::Later, which more or less implements laziness for method calls, it is quite convenient to have the object change class after it has been realized. Otherwise, lots of (brain-dead, I admit) checks fail when they ask UNIVERSAL::isa($obj,'foo');. Of course, one could circumvent this problem with multiple inheritance or by writing a specialized ::Proxy class for every class to be made lazy ...

        I think the time I'm most likely to rebless an object is if I want to subclass an existing package to get some modified behaviour. Mostly in such cases I can simply inherit from the base class and Subclass->new will do the right thing, but in some cases the base class's new relies on the invoked class's name to create the right object.

        My work application has occasionally needed such tricks, since the underlying database abstraction uses the class name to find the object-to-database mapping information. However, as of now, there is only one example of such reblessing in 50 KLOC (and that in a proof-of-concept utility that won't be updated), since most of the original needs for it were removed when the database abstraction was modified to call the invoked class's bless method. We do have 6 examples of classes that overload bless to do various interesting things, and most of those would originally have reblessed the objects instead.

        Hugo

        I have objects that are lazily evaluated using rebless. Basically, the objects represent a pricing scheme. Until $obj->price() is called, the objects are just a hash of the parameters required to run a query against the DB to get the data, and are of the class 'Pricer::Stub'. When Pricer::Stub::price() is called, the routine extracts the required data, converts itself into a real Pricer object, and then calls price() a second time on the new object.

        Code using the Pricer object never knows or cares which object type is involved, as the Pricer objects are manufactured by a factory object.

        I used this because the $obj->price() method is called very often, and thus I didn't want conditional logic in the price() method itself to handle this behaviour. Personally, I think this is a very effective design pattern and I'm happy to use it.
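        A minimal sketch of the pattern described above -- the internals and the stand-in data source are my own guesses for illustration, not the poster's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Pricer::Stub;

# Hypothetical reconstruction of the lazy-rebless pattern: the stub
# holds only the query parameters; the first price() call fetches the
# data, reblesses $self into the real class, and redispatches.
sub new {
    my ( $class, %params ) = @_;
    return bless { params => \%params }, $class;
}

sub price {
    my $self = shift;
    $self->{data} = { price => 42 };    # stand-in for the real DB query
    bless $self, 'Pricer';              # change the object's class in place
    return $self->price(@_);            # redispatch: now calls Pricer::price
}

package Pricer;

sub price { return $_[0]->{data}{price} }

package main;

my $obj = Pricer::Stub->new( sku => 'X1' );
print $obj->price(), "\n";    # prints 42; the DB hit happened exactly once
print ref($obj), "\n";        # prints Pricer -- the stub is gone
```

    After the first call, Pricer::price runs directly with no conditional logic and no extra indirection, which is the whole point of paying for one rebless up front.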

        ---
        $world=~s/war/peace/g

        Can you (or anyone) think of any case where reblessing is useful, in the sense that solving the same problem another way would be awkward?

        I've seen it used with a class hierarchy that was based around incrementally parsing a serialised data structure. As the data was parsed, it was reblessed into more and more specific classes as more became known about the structure in question. Quite neat.

        I've also occasionally used reblessing to implement state transitions.
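        The state-transition idea can be sketched like this, with each state a class and each transition a rebless (class and method names are invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Door::Closed;

sub new        { return bless {}, shift }
sub open_door  { bless $_[0], 'Door::Open'; return $_[0] }
sub status     { return 'closed' }

package Door::Open;

sub open_door  { return $_[0] }    # already open; no-op
sub close_door { bless $_[0], 'Door::Closed'; return $_[0] }
sub status     { return 'open' }

package main;

my $door = Door::Closed->new;
print $door->status, "\n";    # prints closed
$door->open_door;
print $door->status, "\n";    # prints open -- same reference, new class
```

    Method dispatch itself encodes the current state, so there is no per-call "what state am I in?" check scattered through the methods.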

Re^6: Informal Poll: Traits (Long Post Warning!)
by Ovid (Cardinal) on Nov 20, 2005 at 15:17 UTC
    Wouldn't it be nice if you could generate your charts or summarise your data, or search your in-memory DB quickly enough that you didn't need to keep the user apprised of the delay?

    I note that you qualified that with "in-memory", so that's a way out, but I just want to point out that Perl is fast enough, and computers are fast enough, that whenever I have to let my user know of a delay it's almost always due to a complicated database query, heavy disk I/O, or a request to some external resource. These three things, if slow, will be slow regardless of the language. Yes, Perl is slower than most commonly used languages, but much of that can be alleviated with profiling and proper algorithm design.

    And I am unsure yet whether Traits are either the best semantically, or the least likely to degrade performance, of the possible solutions to the problem they address.

    From my personal experience with MI, mixins (I faked 'em via Exporting), Java interfaces and traits, I'm fairly convinced that traits are the best semantically and the least likely to degrade performance (there's a tiny compilation hit with traits, but it's negligible: 300 tests in my latest version run in about 4 wallclock seconds). Of course, while my original suspicion of the superiority of traits came from my reading about them, my current opinion stems from my having experience with traits. I keep hearing in this thread comments which sound dangerously close to "I won't use traits because I haven't used traits" or "I won't use traits because others don't". I find the first argument to be stupid; we never learn anything that way. The second argument might have a bit of merit ... being afraid to lead the way is often a survival trait ... but that doesn't mean people can't try them on non-critical systems so they can find out for themselves whether or not they're worthwhile. Folks railing against a technology they've never used just strikes me as a bit odd.

    The major questions I wonder about are whether the implementation I am maintaining has any bugs, and what features need to be added or tweaked. However, I'm also not saying Class::Trait is the best choice, either. There are several competitors out there which offer different interfaces and capabilities; just because Class::Trait is the most feature-complete doesn't make it the best. Still, I'd hate for folks to let fear of the unknown keep them from trying out these technologies. They really have made my life simpler at work (and I fully realize that my knowledge of real-world use of traits is relatively new. It's better than most have, though :)

    Cheers,
    Ovid

    New address of my CGI Course.
