 
PerlMonks  

Algorithms, Datastructures and syntax

by BUU (Prior)
on Apr 04, 2003 at 15:07 UTC (#248077=perlmeditation)

I've been having a "debate" with one of my friends about the nature of many computer fundamentals. His basic premise is that data structures are the same as algorithms, both of which are specific to a language. (My premise is, of course, the correct one =] ). A concrete example of this is his argument that a "for loop" in ASM would be considered a data structure. Can someone please shed some light here, perhaps with slightly more authoritative sources?

Re: Algorithms, Datastructures and syntax
by Heidegger (Hermit) on Apr 04, 2003 at 15:39 UTC

    A for loop in ASM usually sets a counter at the beginning of the block, and at the end of the block it checks the counter's value and jumps to the start of the block if the loop isn't finished yet. Where does your friend see a data structure here? By declaring a data structure we declare a variable or a set of memory cells. There is no program execution involved in it.

    In general you can say that in a programming language we have data structures and control structures. The for loop clearly belongs to the control structures, since it changes the program's execution path (a jump in ASM).

    Such 'debates' are quite dangerous. Once I was arguing with my co-worker about DOS: I was claiming that it's an operating system, no matter how simple; his point was that it's not, and that we might as well call the BIOS an operating system. And so we continued to argue about such a pointless thing until we started to treat each other like enemies. Not so good. Might have been bad chemistry between us ;-)

      Though, with a handle like Heidegger, I can see why you might be predisposed to arguments that can (potentially) be viewed as pointless. :-)

      ------
      We are the carpenters and bricklayers of the Information Age.

      Don't go borrowing trouble. For programmers, this means Worry only about what you need to implement.

      Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

Re: Algorithms, Datastructures and syntax
by dragonchild (Archbishop) on Apr 04, 2003 at 15:41 UTC
    Using yours truly as an authoritative source will probably only win points with me and mine, but here goes:

    Algorithms are processes that work upon very specific data structures. They are extremely dependent upon the data structure(s) that they are defined to work upon. A concrete example is the for-loop. In every language, it's defined to work on some list. For-loops don't work on hashes or trees or any other data structure.

    Data structures are, well, ways to structure data. They are independent of algorithms. Often, the same data structure will be used in numerous algorithms. The best example of this is the list, used in sorting, searching, iterating, etc.
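    A minimal Perl sketch of that reuse--one plain list feeding three different algorithms. The sample values and the List::Util helpers are just illustrative, not anything from this thread:

```perl
use strict;
use warnings;
use List::Util qw(first sum);

# One data structure (a plain list) serving several algorithms.
my @nums = (5, 3, 8, 1, 9);

my @sorted = sort { $a <=> $b } @nums;   # sorting
my $found  = first { $_ > 4 } @nums;     # searching (first element > 4)
my $total  = sum @nums;                  # aggregation

print "@sorted\n";   # 1 3 5 8 9
print "$found\n";    # 5
print "$total\n";    # 26
```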

    The pedagogical mistake made in most CS classes is to focus overly much on algorithms without enough focus on data structures, especially early on in the curriculum. Many students are left with a feeling that they have no idea how to think about their stuff, praying that the specific algorithm will work correctly, usually with much hand-waving.


Re: Algorithms, Datastructures and syntax
by diotalevi (Canon) on Apr 04, 2003 at 15:42 UTC
Re: Algorithms, Datastructures and syntax
by Abigail-II (Bishop) on Apr 04, 2003 at 15:49 UTC
    I've graduated in the field of Algorithms and Datastructures, and done academic research in that field for four years. I've always considered algorithms and datastructures to be equivalent. If I give you the datastructure, the algorithm is usually obvious, and vice versa.

    Both algorithm and datastructure are language independent. Claiming they are language specific is like saying that plays are language specific. A "for loop" is not an algorithm, for the same reason a "sentence" isn't a novel.

    Abigail

      Could you not say that a for loop is an algorithm for counting from one number to another? Or perhaps an algorithm for repeating a particular piece of code a certain number of times? I think you can abstract a for loop into an algorithm. I don't think there is much use in doing so, but from the philosopher's chair?
        Wax practical, poetic even, not philosophical; otherwise how do you know all those programs you wrote really exist and aren't a dream? (Don't try to answer, just forget philosophy; it's got no place in real-world programming -- it's not practical.)


        MJD says you can't just make shit up and expect the computer to know what you mean, retardo!
        I run a Win32 PPM repository for perl 5.6x+5.8x. I take requests.
        ** The Third rule of perl club is a statement of fact: pod is sexy.

Re: Algorithms, Datastructures and syntax
by Anonymous Monk on Apr 04, 2003 at 15:51 UTC
    Data
    Information. Integer, string, possibly code even.
    Data Structure
    Data organised in a specific way. In the abstract, any data that is not simply bits. An integer, for instance, is a data structure, insofar as negative numbers are encoded in a particular way. Normally, however, the term is not used unless the data contains pointers to other objects, or (especially) copies of itself. Many have names: linked list, queue, stack, etc.
    Algorithm
    Algorithms are processes for achieving a goal. When you do divide and conquer to "guess" the right number between 0 and 100 you are performing an algorithm. When the computer converts an integer stored in binary into a decimal for you to read, it is performing an algorithm.
    Algorithm + Data Structure = Computer Program
    Normally data structures and algorithms are pretty intimately related. Certain algorithms need their data organized in certain ways. In other situations an algorithm doesn't care: so long as it can interact with the data in a particular way, it doesn't worry. For instance, many algorithms use heaps. Heaps themselves can be implemented in many ways, using different data structures and algorithms. So an algorithm _implementing_ a heap cares how its data is organized, but an algorithm _employing_ a heap doesn't care how the heap's data is organized.
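    The "guess the right number" example above can be sketched as a binary search in Perl; the guess_number routine and its arguments are hypothetical, invented here for illustration:

```perl
use strict;
use warnings;

# Divide and conquer: repeatedly halve the candidate range until the
# "secret" number is found. Returns the number and the guess count.
sub guess_number {
    my ($secret, $lo, $hi) = @_;
    my $tries = 0;
    while ($lo <= $hi) {
        $tries++;
        my $mid = int( ($lo + $hi) / 2 );
        return ($mid, $tries) if $mid == $secret;
        if ($mid < $secret) { $lo = $mid + 1 } else { $hi = $mid - 1 }
    }
    return;   # secret was not in the range
}

my ($answer, $tries) = guess_number(37, 0, 100);
print "found $answer in $tries guesses\n";   # found 37 in 3 guesses
```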

    I don't really see what your friend means about a for loop in assembler being a data structure. A for loop in assembler (depending on what this means...) is only a handful of ops, and would probably employ a register as its "data structure".

    The only possible idea I have about your friend's point is that sometimes one wishes to treat code as data, and then you could have an algorithm which operates on that data (which is really code). Consider a compiler: it may indeed wish to treat a for loop as data, and modify it in a particular way. Until the code gets executed, it's still data. And maybe even later.

    In a tongue-in-cheek way of thinking about it you could say that the algorithm part of a program is every instruction the cpu executes, and the data is everything else. This overlooks the difference between an algorithm and a heuristic, and when code is data and data is code, but whatever.

    :-)

Re: Algorithms, Datastructures and syntax
by BrowserUk (Pope) on Apr 04, 2003 at 20:41 UTC

    What your friend is saying does make a certain amount of sense, especially if the "for loop" in question is a Perl-style for @array type of loop. In this case, the iteration is not controlled by an abstract mechanism such as a numeric count, but by the volume of data--the array in this case.

    You could translate that into the algorithmic definition of "iterate through the array". The nice part of this is that neither the size, type nor any other physical characteristic of the data is involved in the algorithm. If the array is empty, nothing happens. If the array contains disparately sized elements--chars, ints, floats, long longs, strings etc.--it doesn't change the code or the algorithm. You simply don't need to know how many elements, or what type of data, the array holds. Compare this with a C-language for loop, which relies on all the elements being the same size.

    Compare it also with using a C-style for loop in Perl, which requires at least one additional variable (the index) and one additional constant (the stop condition), and has the downside that the number of repetitions of the loop can be affected during iteration by adjusting/corrupting the current values of either the iterator or the stop condition.

    In the perl type for/foreach loop, the algorithm is entirely defined by the data, so the statement that the algorithm is the data starts to make some sense.
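    A minimal sketch of that data-driven iteration; the sample values are invented:

```perl
use strict;
use warnings;

# The foreach loop is driven entirely by the data: the body never
# inspects the size or the types of the elements.
my @mixed = (42, 3.14, 'a string', [1, 2, 3]);

my $iterations = 0;
for my $item (@mixed) {
    $iterations++;
    print ref($item) ? "reference\n" : "scalar: $item\n";
}
print "$iterations iterations\n";   # one per element, whatever its type

for my $item () {   # empty list: the body simply never runs
    die "never reached";
}
```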

    Similarly, this technique of capturing the algorithm in the data is nicely demonstrated by the common Perl technique of using a hash to count the unique elements in an array. It's instructive to try and do this task in some language that doesn't have either hashes or dynamically sized & dynamically grown arrays.

    The first thing you need to do is allocate some space to hold the counts of the unique elements. The obvious thing to do is to create an array to hold the counts--but how big an array do you need? There is no simple way to determine this. It could be that every element of the array being counted is unique, in which case you need an equivalently sized array to hold the counts. Conversely, they could all be the same, in which case you only need one count. In between, the variations are unknowable. You cannot even make one pass to determine how many counts you need, allocate, and then make a second pass to do the counting.

    If you are using C, you might use a recursive routine that starts with the first element, then scans the rest of the array, counting and marking all the similar elements it finds. It then saves the value and associated count on the stack whilst it recurses to find, count and mark the second element, and so on until a pass completes without finding any unmarked elements. If the routine also keeps a global count of the number of passes as it recurses, it can then allocate the space to hold the counts and populate it as the recursion unwinds. As you can see, this is a multi-pass process that will consume prodigious amounts of stack if the array is large and has many unique elements. Even given the speed of C, this is not going to be fast, and as each programmer will have to roll his own solution every time this requirement comes up, it is costly and time-consuming to code and carries a high risk of bugs.

    The simplicity of doing this in Perl is not due to any particular high-level language feature, but is directly attributable to the high-level data structures that Perl provides. This allows the capture of this complex algorithm directly in the data structure (the hash) used to accumulate the counts.
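    The Perl idiom being described is essentially one line; the sample data here is made up:

```perl
use strict;
use warnings;

# Counting unique elements with a hash: the hash grows on demand, so
# no sizing pass and no manual allocation are needed.
my @data = qw(apple pear apple plum pear apple);

my %count;
$count{$_}++ for @data;

for my $item (sort keys %count) {
    print "$item: $count{$item}\n";
}
# apple: 3
# pear: 2
# plum: 1
```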

    The real beauty of Perl lies in its data types--a strange thing to say about a language known as untyped or weakly typed. It is the total generality of Perl's data types, their polymorphic nature, the clever choice of a small number of carefully chosen language constructs for manipulating them--allowing (for example) Perl's arrays to be used as traditional arrays of chars/ints/floats/strings or any combination of them; as stacks, FIFOs, LIFOs; to be sliced and spliced--and the transparency of the underlying mechanisms to the Perl programmer that are the real power of Perl.

    If you've ever spent time wading through the C++ STL or the Java API documentation trying to figure out whether you need a Bag class, an OrderedSet, an UnorderedSet, a Vector, or any of the myriad other variations on this theme to capture the particular semantics of your dataset; or if you have tried to adapt the resultant code to account for small changes in the operational requirements of the data, and tried to force-fit these into one of the provided datatypes; or, worse, tried to find or hack the mechanisms to convert from one of these to another--then you will really begin to appreciate the power and flexibility of the Perlish way.


    Examine what is said, not who speaks.
    1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
    2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible
    3) Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke.
      What a bunch of baloney.

      Compare it also with using a C-style for loop in Perl, which requires at least one additional variable (the index) and one additional constant (the stop condition), and has the downside that the number of repetitions of the loop can be affected during iteration by adjusting/corrupting the current values of either the iterator or the stop condition.
      And how do you think Perl allows you to do a foreach? Just because you don't see the bookkeeping at a language level doesn't mean it doesn't exist. Besides, with a foreach loop, you get an additional variable as well - the iterator. Even if there's none mentioned in your program source, you still get a new $_.

      You say the number of iterations can be affected by changing the value of the iterator (I don't see how you can change the stop condition; C doesn't have an eval, and even Perl doesn't allow you to change code that was already compiled), and call that a downside. I would call that a feature, making a C-style for more flexible than a Perl-style foreach. For instance, try to translate:

      for (my $i = 0; $i < @array; $i += 2) { ... }

      to a foreach style. It's not easy.

      In the perl type for/foreach loop, the algorithm is entirely defined by the data, so the statement that the algorithm is the data starts to make some sense.

      What algorithm are we talking about? An algorithm might say "iterate over the elements of the set". A for or a foreach might be the implementation of that algorithm in a specific computer language, but that's not the algorithm itself.

      If you are using C, you might use a recursive routine that starts with the first element, then scans the rest of the array, counting and marking all the similar elements it finds. It then saves the value and associated count on the stack whilst it recurses to find, count and mark the second element, and so on until a pass completes without finding any unmarked elements. If the routine also keeps a global count of the number of passes as it recurses, it can then allocate the space to hold the counts and populate it as the recursion unwinds. As you can see, this is a multi-pass process that will consume prodigious amounts of stack if the array is large and has many unique elements. Even given the speed of C, this is not going to be fast, and as each programmer will have to roll his own solution every time this requirement comes up, it is costly and time-consuming to code and carries a high risk of bugs.

      Oh, come on, get real. Just because something is written in C doesn't mean it's using stupid algorithms. One can use hashes in C as well (how do you think Perl hashes are implemented in perl?), and any programmer worth his salt will do so. It's an elementary programming technique. No one in his right mind is going to employ the algorithm you are proposing.

      And indeed, you don't know how many elements the hash is going to need (although there's a trivial upper bound!). But then, you don't know that in Perl either, do you? And just as perl can grow a hash on demand, you can grow a hash in C as well. Amazing, isn't it? (Oh, could it just be that perl is written in C?)

      The simplicity of doing this in Perl is not due to any particular high-level language feature, but is directly attributable to the high-level data structures that Perl provides. This allows the capture of this complex algorithm directly in the data structure (the hash) used to accumulate the counts.

      What high level data structures? Perl has arrays and hashes. That's it. Nothing that would give you elements in some order. The only efficient searches you can do are exact matches using hashes. But no range queries, nearest neighbour, next, not even minimum.

      Of course you can build all this stuff, but then, so can you with C.

      Abigail

        Of course you can build all this stuff, but then, so can you with C.
        0001 0001 0101 1100 1000 1011 1010 0010 1101 0111 1110 0010 1000 0100 1000 0111 1010 1001 0111 1100 0111 1111 0111 0111 1110 1000 0011 1011 1101 1010 1111 1100 1100 0100 0011 0110 1010 1010 1010 0010

        See?

        What a bunch of baloney.

        I'll come back to this.

        And how do you think Perl allows you to do a foreach?

        - The point is, I don't need to know.

        Just because you don't see the bookkeeping at a language level doesn't mean it doesn't exist.

        - Of course it doesn't. What its absence at the language level does mean is that as a perl programmer I don't have to think about it.

        Besides, with a foreach loop, you get an additional variable as well - the iterator. Even if there's none mentioned in your program source, you still get a new $_.

        Yes, but that additional variable is managed by perl, not by me. I don't have to name it--though I can--but more importantly, I don't have to manage it. I don't need to use sizeof to determine how much to increment it by, or dereference it to read or write the array element. I don't have to concern myself with the type of the data in the element of the array that $_ is aliasing; I just go about my business of using it and allow perl to take care of managing the array.

        You say the number of iterations can be affected by changing the value of the iterator (I don't see how you can change the stop condition; C doesn't have an eval, and even Perl doesn't allow you to change code that was already compiled), and call that a downside.

        In C, I can increment, decrement or zero the iterator and thereby change the number of iterations over the array. Used deliberately, this can be useful, but done accidentally, you get code that runs off the end of the array and corrupts other data. Show me an algorithm that uses the ability to modify the iterator within the body of the loop to good effect, and (in most cases) it will be possible to re-code the algorithm so it doesn't require modifying the iterator, resulting in more reliable and more transparent code.

        With regard to changing the stop condition: often this is coded in terms of for (int i=0; i < somevar; i++) .... If somevar is modified, the stop condition changes. In Perl, the stop condition is defined by the size of the list, which cannot be accidentally modified within the loop:

        $n=10; for (1..$n) { $n=0 if $n==5; print $_, $/; }

        Even though the list is generated at runtime using lazy evaluation, attempts to modify that list within the bounds of the loop fail. The above loop prints 1 through 10 despite the value of $n changing. I agree that this reduces the flexibility of the loop, but if the algorithm necessitates the change, then this can be written explicitly (eg. last if $n == 5), rather than by modifying the stop condition or the iterator. Whilst this can also be done (using break;) in C, it isn't the explicit modification that catches people out; it's the accidental modification that causes problems. This doesn't happen in Perl.

        I would call that a feature, making a C-style for more flexible than a Perl-style foreach.

        Agreed it's more flexible, but it is also more error prone.

        For instance, try to translate:

        for (my $i = 0; $i < @array; $i += 2) { ... }

        to a foreach style. It's not easy.

        One way: for ( @array[ map { $_*2 } 0 .. $#array/2 ] ) { ... }

        I'm not saying that this is better in every case, nor even that it is more intuitive, but if the algorithm requires the manipulation of every other array element, this achieves that. There are other ways of doing this depending on the needs of the algorithm; see Re: Fold a list using map splices for one that is useful when you want to manipulate the array elements in pairs (or 3's or 4's etc.), which is often the case when this type of loop is used. Or Re: Starting foreach at an arbitrary position and Re: Non-destructive array processing for some iterator solutions using closures.

        The inability of Perl iterators to iterate in steps is, I believe, a known limitation that is overdue for being addressed--and, I also believe, due to be addressed in Perl 6.

        And of course, perl gives you the C-style for as well as the foreach for those occasions when it is required. The point I was trying to make is that for those occasions when the algorithm requires iteration over every element of an array, the foreach-style loop removes many of the gotchas that pervade achieving the same purpose in C.

        I never for a moment tried to imply that C cannot do anything that perl can do, only that perl hides many of the details that make it easy to make mistakes--starting the loop from 1 instead of 0; using <= instead of < in the terminating condition; etc.

        What algorithm are we talking about? An algorithm might say "iterate over the elements of the set". A for or a foreach might be the implementation of that algorithm in a specific computer language, but that's not the algorithm itself.

        I disagree. If the algorithm calls for "iterating over the elements of a set", that is exactly what the for/foreach loop does. I don't need to know how many or what type of elements are in the set; I just give the set to the for and it gives me them back one at a time until I'm done. That's it. Said that way it may sound like a simple algorithm, but I would like to see a C implementation of an equivalent generic loop. Of course it can be done in C--that's how perl does it--but in order to achieve it in a way that the application programmer can use in his own algorithms, at the point of use, with the same level of simplicity, you are basically going to have to re-write Perl arrays. I have never seen a C library that comes even close.

        Oh, come on, get real. Just because something is written in C doesn't mean it's using stupid algorithms. One can use hashes in C as well (how do you think Perl hashes are implemented in perl?), and any programmer worth his salt will do so. It's an elementary programming technique. No one in his right mind is going to employ the algorithm you are proposing.

        Yes, of course you can implement hashes in C. I recently did exactly that, or at least converted a C implementation into Perl, if you recall. And yes, that would be a stupid implementation, but that was the point. The precursor to the example was

        It's instructive to try and do this task in some language that doesn't have either hashes or dynamically sized & dynamically grown arrays.

        I described that method to emphasize the benefits of having hashes available as a part of the language, but even implementing the hashes is not the complete solution; you still need the dynamic arrays.

        And indeed, you don't know how many elements the hash is going to need (although there's a trivial upper bound!).

        True. A trivial upper bound is available, but if the dataset is not memory bound, or is too large to be memory bound, as might be the case with Tie::File for example, then the option of allocating an array the size of that trivial upper bound may not be practical. Even when it is, it would be extremely costly in memory to allocate a 100_000-element integer array only to discover that there were only a handful of unique elements in the array. If the array is too big to fit in memory, then at least a second pass is required to discover how big to make the array of counts, unless the elements are conveniently all of a fixed size that would allow a mathematical determination of the upper bound.

        But then, you don't know that in Perl either, do you? And just as perl can grow a hash on demand, you can grow a hash in C as well. Amazing, isn't it? (Oh, could it just be that perl is written in C?)

        What's amazing is that you can read the same words I wrote and misinterpret their intention so emphatically. The intent was to emphasize the benefits of having these features available in Perl, the major ones being that you don't have to implement them yourself, and that you get a proven implementation out of the box. Essentially the same benefits that come from any code re-use.

        What high level data structures? Perl has arrays and hashes. That's it.

        Perl's arrays are infinitely higher level than C's arrays. They are type-polymorphic, dynamic and extensible. You, better than most, know that.

        Yes, you can realloc and memcpy array space in C, but that doesn't even come close to achieving the same result, as each element is still of a fixed size.

        Yes, you can make the array an array of pointers, and the pointers can point to any type of element, but you then have to manage not only the allocation, deallocation, growth and shrinkage of the array of pointers, but also the allocation/deallocation of the things pointed at--and you still haven't got there, as you also need to remember the type of data pointed at by each pointer. Even with this ability added, now try splicing a new set of elements into or out of that array of pointers, and you have a whole new set of problems to address.

        So now you implement the array of pointers as a singly linked list of pointers to structures that contain type information and the data (or a pointer to it), and you have come close to re-implementing the Perl array.

        Now add the structures and algorithms required to allow this data structure to be linked (tied) to a file of CSV or XML data, or a DB, or....

        Now try and write the generic loop to iterate over each element of that implementation with all the pointer chasing, dereferencing, type and bounds checking that involves.

        Once you have done that, you will have what I think can rightly be categorised as a "high-level data structure", and what perl gives me out-of-the-box.

        Nothing that would give you elements in some order. The only efficient searches you can do are exact matches using hashes. But no range queries, nearest neighbour, next, not even minimum.

        Iterating an array could be said to be giving you the elements in 'some order', as could iterating the sorted keys of a hash, but I suspect that you are alluding to the absence from Perl of a generic tree data structure, and I concur that this provision would be a real benefit. Whilst it is possible to implement trees of various kinds using Perl, the overhead of switching in and out of the low-level and high-level representations incurs a heavy penalty.

        It would be nice if the language could provide the programmer with high-level building blocks for constructing various kinds of trees, whilst allowing the greater majority of the pointer chasing and node traversal to be performed by efficient (and tested, accurately coded) low-level code in perl itself. However, I (without having spent a lot of time thinking about it) cannot see how this might easily be done. Almost every action that can be performed upon a tree of any flavour requires an intimate knowledge of that flavour. It seems to me that to build a generic set of tree building blocks would require the programmer to supply a callback of Perl code to be invoked for each node during every operation--insertions, deletions, traversals, rebalancing etc.--and these callbacks would severely restrain the performance and render the benefits of the provision minimal.
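        For what it's worth, a tree can be hand-rolled from nested hash references--a rough sketch of the kind of all-Perl implementation whose overhead is being discussed, not a proposal for the generic building blocks:

```perl
use strict;
use warnings;

# A binary search tree from nested hash references. Every insert and
# traversal step is interpreted Perl code, which is the per-node
# overhead a language-level tree type would avoid.
sub insert {
    my ($node, $value) = @_;
    return { value => $value } unless $node;
    my $side = $value < $node->{value} ? 'left' : 'right';
    $node->{$side} = insert($node->{$side}, $value);
    return $node;
}

sub in_order {
    my ($node) = @_;
    return () unless $node;
    return (in_order($node->{left}), $node->{value}, in_order($node->{right}));
}

my $root;
$root = insert($root, $_) for 5, 2, 8, 1;
print join(' ', in_order($root)), "\n";   # 1 2 5 8
```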

        If you can see how this could be done, or how any other generic high-level data structure that Perl does not currently have could be added, I'd be interested to hear about it.

        Of course you can build all this stuff, but then, so can you with C.

        The entire point of my original post was that Perl's arrays and hashes, in combination with its polymorphic scalars, mean that many algorithms that benefit from encapsulating (much of) the algorithm in the data rather than in code are simple to implement in Perl. This is because it provides a simple set of (what I class as) high-level data structures that can be used directly, or easily adapted, in the implementation of many different algorithms.

        Java and C++ provide extensive libraries of datatypes, but because these datatypes are extensions to the language rather than a part of it, it becomes necessary to also add code to the library to manipulate them, rather than being able to manipulate them with a small number of generic built-in functions. This means that they become very specific to the purpose for which they were designed, which makes it difficult to use them in ways that the implementors didn't think to provide methods for. You can subclass them to add functionality, but this often comes at the expense of adding complexity to the code that uses them, and overhead that would be avoided if the functionality were added at the lower levels. It also makes it even harder to use them in a generic fashion or to mix and match between them. As each datatype becomes more and more specialised, the problems of API growth, producing good reference documentation, and programmer overload become more acute. I've already mentioned the myriad collection classes in the C++ STL; my other favorite example is the stream classes in Java, with UnbufferedStreams, BufferedInputStreams, BufferedOutputStreams, BufferedReaders, BufferedWriters, DataInputStreams, DataOutputStreams, CharArrayReaders, CharArrayWriters, ByteArrayInputStreams, ByteArrayOutputStreams, ObjectInputStreams, ObjectOutputStreams, FileReaders, FileWriters and so on ad nauseam.

        My experience of using these is that I start out reading and reading, trying to decide which is the correct one for my purpose. I eventually settle upon one and start coding; I then hit an impasse where the class I have chosen doesn't allow me to do what I need, so I have to either switch all the code to a different class or coerce one type into another for a particular part of the algorithm. Then, as often as not, the requirements change, or I find a limitation or omission in my (or someone else's) original specification, and find myself having to go back and redo some or all of the code to accommodate the change or belated discovery.

        In my still less than one year of using Perl, I have yet to encounter this scenario. I am constantly amazed by the flexibility of Perl's few but oh-so-useful datatypes and the way they lend themselves to use in so many algorithms. That's not to say I haven't recognised some limitations--tree structures, as described above, are one such--but I am still impressed as hell with what I can achieve with the basic types before I need to resort to constructing my own. I'm also impressed by the simplicity of the mechanisms provided for extension when I do need to resort to that.

        Like you, I wish that the OO mechanisms in perl were a little less ad hoc, although your own inside-out technique addresses many of those concerns. I also wish that the runtime costs of tying, overloading and OO were lower. I wish that the tying mechanisms for aggregate data types allowed me to process subset parameters en masse rather than individually as now; and I still baulk at splitting a scalar into a linked list of pointers to structures, each holding a single 8-bit value, just to get at the individual chars. Using substr, /(.)/g or chop just obscures the algorithm with detail, and unpacking is only a slightly more efficient way of splitting to an array--I really miss the syntactic simplicity and efficiency of accessing the bytes or chars of a string directly using an array-style syntax. I have also encountered situations where the overhead of these datastructures means that the flexibility they provide is unneeded and is therefore pure overhead. This in turn can lead to the situation where the lack of control over the size and type of memory allocated to hold the data means that algorithms that would perform very well if the whole dataset could fit in memory become slow and inefficient because of the need to page the dataset to and from disc--whether that paging is done through the OS, a perl mechanism like Tie::File or DBI, or via an RDBMS. I had started devising my own modules to alleviate this, but discovered that tying imposed its own penalties. I also saw that Perl 6 is to address these issues with attributes, and so have ceased to develop the ideas further; I have modules for my own use in the interim.
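        For readers following along, a small sketch of the workarounds being complained about: substr gives read and (as an lvalue) write access to individual chars, while split // pays to expand the whole string into an array. Neither is as direct as true array-style indexing would be.

```perl
use strict;
use warnings;

my $s = 'Hello';

# Read one char, then overwrite one char via lvalue substr.
my $c = substr $s, 1, 1;        # $c is 'e'
substr( $s, 0, 1 ) = 'J';       # $s is now 'Jello'

# Or expand the whole string into a char array (costly for big strings).
my @chars = split //, $s;
print "@chars\n";               # J e l l o
```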

        All of that said, for the most part, I find that perl allows me to concentrate on the bigger picture rather than getting bogged down in the nitty-gritty of data storage and representation more of the time than any other language, and I attribute this to the power and flexibility of Perl's few but high-level data types. You are completely at liberty to call this "baloney" if you wish, but a cogent argument is more impressive than such dismissal.


        Examine what is said, not who speaks.
        1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
        2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
        3) Any sufficiently advanced technology is indistinguishable from magic.
        Arthur C. Clarke.
Re: Algorithms, Datastructures and syntax
by mojotoad (Monsignor) on Apr 04, 2003 at 22:35 UTC
    Another way to look at it sometimes is this: in order to populate a data structure, you will have had to perform some algorithmic work. Usually this pre-organization of the data will facilitate future data manipulation, making certain tasks easier or more efficient. In this sense, data structures can be seen as devices (such as a battery or reservoir) that store a certain amount of work -- once the work is done, you can reap the benefits time and time again in the future.
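    A tiny illustration of "stored work" in Perl terms (the record layout here is invented for the example): one O(n) pass builds a hash index, and every later lookup is a constant-time probe instead of a linear scan.

```perl
use strict;
use warnings;

my @records = ( [ 1, 'alice' ], [ 2, 'bob' ], [ 3, 'carol' ] );

# One pass of algorithmic work to build the structure...
my %by_id = map { $_->[0] => $_->[1] } @records;

# ...then each later lookup reaps that stored work in O(1),
# instead of re-scanning @records every time.
print $by_id{2}, "\n";   # prints "bob"
```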

    You see this also as a trade-off between time and space, such as with the Schwartzian Transform.
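    For anyone who hasn't met it, the Schwartzian Transform in miniature: spend memory on a temporary [key, value] wrapper so the expensive sort key (here just length, standing in for something costlier) is computed once per element rather than once per comparison.

```perl
use strict;
use warnings;

my @files = qw( notes.txt a.long.name.tar.gz x.c );

# map in, sort on the precomputed key, map back out.
my @by_length =
    map  { $_->[1] }                    # unwrap
    sort { $a->[0] <=> $b->[0] }        # compare cached keys
    map  { [ length $_, $_ ] } @files;  # compute key once each

print "@by_length\n";   # x.c notes.txt a.long.name.tar.gz
```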

    The real world is full of analogies (or vice versa) once you think of it in terms of work -- pulleys, levers, catalysts, etc. The task and physical circumstances ultimately define the approach towards a solution.

    Matt

Re: Algorithms, Datastructures and syntax
by pg (Canon) on Apr 05, 2003 at 16:32 UTC
    Setting syntax aside, I would say a data structure is not an algorithm, but the two are thoroughly intertwined.

    A clever algorithm usually has one or more clever data structures to support it, or even make it possible. At the same time, a data structure is usually considered clever because it can be used in some clever algorithm and improve it.

    Is a "for loop" in ASM a type of data structure? Your friend should reconsider his position on this. It might seem to be a data structure from a certain perspective, but it is definitely not. Part of the strength of human thinking is the ability to differentiate seemingly similar things from each other.

    A data structure is how things are organized, not how they are handled.
Re: Algorithms, Datastructures and syntax
by gunder (Novice) on Apr 07, 2003 at 14:18 UTC
    My personal fundamental problem with this argument is that we have made very simple things way too complex. Is this not a Perl site... isn't that the fundamental idea behind the language? I personally like the 1000 1001 1011 1001 1101 argument. Take it for what it is and tell your friend to have another beer. The machine is stupid, so it comes down to a big tape and a reader... however, to the smart man it's control and states in a finite environment. You are all right... ok, and that is that... Next topic.
      Just to throw my 2 cents in. I think you are complicating this far more than necessary. Here are a couple of definitions I like from the National Institute of Standards and Technology (http://www.nist.gov/dads/):

      DATA STRUCTURE: An organization of information, usually in memory, for better algorithm efficiency, such as queue, stack, linked list, heap, dictionary, and tree, or conceptual unity, such as the name and address of a person. It may include redundant information, such as the length of the list or the number of nodes in a subtree. Note: Most data structures have associated algorithms to perform operations, such as search, insert, or balance, that maintain the properties of the data structure.

      ALGORITHM: A computable set of steps to achieve a desired result. Note: The word comes from the Persian author Abu Ja'far Mohammed ibn Mūsā al-Khowārizmī, who wrote a book of arithmetic rules dating from about 825 A.D.

      -spiderman
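      The NIST note about "associated algorithms ... that maintain the properties of the data structure" can be shown in a few lines of Perl (the sub name is mine, not from any module): the structure is a sorted array, and the insert routine exists purely to preserve its defining property.

```perl
use strict;
use warnings;

# The data structure: an array kept in ascending order.
# The associated algorithm: an insert that maintains that property.
sub insert_sorted {
    my ( $list, $value ) = @_;
    my $i = 0;
    $i++ while $i < @$list && $list->[$i] < $value;
    splice @$list, $i, 0, $value;   # insert without disturbing the order
}

my @sorted = ( 1, 3, 7 );
insert_sorted( \@sorted, 5 );
print "@sorted\n";   # 1 3 5 7
```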

Node Type: perlmeditation [id://248077]
Approved by broquaint
Front-paged by TStanley