This node was taken out by the NodeReaper on Jul 19, 2011 at 15:49 UTC
Reason: [ww]: Mark OT; no Perl content


Re: Processor or Memory
by BrowserUk (Pope) on Jul 19, 2011 at 08:21 UTC
    with huge increases still just around the next corner,

    CPU frequencies have stagnated at 2.0 to 3.8 GHz since 2004 (the last two generations at least). They will not be going any higher in the foreseeable future.

    is not memory usage now the more important factor

    Conversely, memory chips have doubled in size (whilst remaining the same price) every 18 months over that period.

    In two words: no, and no.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Well OK, the frequency has stagnated, but the architecture continues to improve, bringing more raw computing power to the table, with 8-core Intel i7 processors about to hit the market and six-core AMD processors already available.

      Obviously it's best to optimise for both CPU and memory usage, but the point was: if you had to choose between them, which is more important?

        Don't forget: the more memory you use, the more data has to be shoved through the FSB (do they still call it that?), and the more likely it is that the program will have to call upon data that is not in its L1, L2 and L3 caches.

        Processors these days have several megabytes (up to about 12) of onboard cache. If your program and all its data can fit entirely within that amount, it will perform significantly better than if the processor is forced to keep evicting chunks of data back to main memory and fetching them again.

        When your server is dealing with dozens of simultaneous requests, how tightly your code uses memory is going to have a very significant impact on the throughput of responses.

        Back in the days of the 486, the FSB used to run at between 16 and 50 MHz, with a multiplier of 2x, meaning the processor operated at twice the frequency of the FSB.

        These days the FSB operates at around 1333 MHz (faster on some of the very latest boards), with the processor running at around 3 to 4 GHz, a multiplier of around 3x. However, that FSB bandwidth is also shared between multiple cores, 6 or even 8 on the latest chips, meaning that in total you have around 24 core clock cycles to share between the cores for every single FSB clock cycle available.

        In such a situation, the way to get maximum throughput is to make sure as far as possible that your program and the data it needs fits entirely in the L1 cache within the individual processor core, with only external data like database lookups being transported through the FSB.
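
        To make the locality point concrete, here is a rough microbenchmark sketch in Perl (using the core Benchmark and List::Util modules; the array size is an arbitrary assumption, and Perl's per-element overhead will mute the effect compared to C). Both loops touch exactly the same elements; only the access pattern differs.

            use strict;
            use warnings;
            use Benchmark qw(cmpthese);
            use List::Util qw(shuffle);

            my $n    = 1_000_000;                 # arbitrary size for the sketch
            my @data = map { $_ * 2 } 0 .. $n - 1;

            my @seq  = (0 .. $n - 1);             # walk memory in order
            my @rand = shuffle(@seq);             # same indices, scattered order

            cmpthese(-3, {
                sequential => sub { my $s = 0; $s += $data[$_] for @seq;  $s },
                random     => sub { my $s = 0; $s += $data[$_] for @rand; $s },
            });

        Any gap between the two rates is the caches and prefetcher at work: the data is identical, and only the order of memory accesses changes.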

        So I ask again, is it more important to optimise for memory usage or processor usage, and I assure you... the question has hidden depth.

Re: Processor or Memory
by davido (Cardinal) on Jul 19, 2011 at 18:45 UTC

    Just because computers become faster doesn't necessarily mean we should lose sight of what makes an algorithm efficient. And being computationally efficient doesn't necessarily have to be at odds with memory efficiency, either.

    An inefficient sort routine such as the Bubble Sort may be fairly straightforward to understand. So on Mars, where sort doesn't exist, computers are incredibly fast, and people are reluctant to learn about best practices, algorithms, and CPAN tools (that's how they roll on Mars), one might be inclined to write his own sort routine. And in so doing, he may come up with the bubble sort, since it's so simple. He will have re-invented an O(n**2) sort implementation.
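
    For reference, here is a minimal bubble sort sketch in Perl (numeric ascending order; an illustration of the algorithm named above, not code from the original post):

        use strict;
        use warnings;

        # Classic bubble sort: repeatedly sweep the array, swapping
        # adjacent out-of-order pairs. Worst case is O(n**2) comparisons.
        sub bubble_sort {
            my @a = @_;
            for my $end (reverse 1 .. $#a) {
                my $swapped = 0;
                for my $i (0 .. $end - 1) {
                    if ($a[$i] > $a[$i + 1]) {
                        @a[$i, $i + 1] = @a[$i + 1, $i];
                        $swapped = 1;
                    }
                }
                last unless $swapped;   # no swaps: already sorted
            }
            return @a;
        }

        print join(' ', bubble_sort(5, 3, 8, 1, 9, 2)), "\n";   # 1 2 3 5 8 9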

    Now over on planet Vulcan, sort exists, as well as a good understanding of algorithms, and common best practices. And over on planet Vulcan, they're religious about learning the ins and outs of CPAN tools, as they are about reading the Perl POD. So they know that there's a better solution. The Merge Sort is a much better algorithm. It runs in O(n log n) time, and is stable.
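
    As it happens, Perl's own built-in sort has used a mergesort implementation since 5.8, and stability can be requested explicitly via the sort pragma. So the Vulcan approach is essentially one line (the rabbit data here is hypothetical; ISO dates compare correctly as strings):

        use strict;
        use warnings;
        use sort 'stable';    # ask explicitly for a stable sort (perl >= 5.8)

        my @rabbits = (
            { name => 'Flopsy', born => '2011-03-01' },
            { name => 'Mopsy',  born => '2011-01-15' },
            { name => 'Peter',  born => '2011-03-01' },
        );

        # O(n log n) comparisons; Flopsy stays ahead of Peter (stable).
        my @by_birth = sort { $a->{born} cmp $b->{born} } @rabbits;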

    Now since the population of Mars is relatively small, and computers are super duper fast there, nobody notices at first how inefficient their home brewed Bubble Sort is. The Vulcans have a higher population, and their computers are busy computing the question to the answer 42, so they value efficiency.

    But then one day Earth sends a Mars rover up there, and it has a couple of specially engineered bunny rabbits on board who have no problem with cold, dry, oxygen-challenged worlds. All they care about is reproduction, and they're quite good at it. Martians take the rabbits as pets and give them all names. Population explosion: given that Martians are such gracious individuals (have you ever met a rude one?), they can't bear to prevent their pets from doing what they enjoy most. And as loving as they are, they keep naming every one of the offspring. Computers are enlisted to help keep track of everything. But within a few years there are 8,000,000,000 bunny rabbits hopping around. What an opportunity! What an untapped market! Let's export them galactically!

    So the Martians turn to their computer guru. "Please help us to catalog our rabbits so we can list them all on eBay." First they need to sort them into order by date of birth so that the ones nearing expiration get shipped first. Sadly their bubble sort takes 8,000,000,000**2 computations to find the order in which the rabbits should ship out. That's 64,000,000,000,000,000,000 (6.4 * 10**19) computations! It doesn't matter how fast the computer is if the problem doesn't scale well. In the meantime rabbits consume all the food on Mars, the Martian population succumbs to the plague introduced upon it by the Old World (Earth), and the entire ecosystem collapses; all because an inefficient algorithm was chosen.

    I have a nursery rhyme book that I read to my two year old. There's a poem in it that goes like this:

    For want of a nail,
        The shoe was lost;
    For want of a shoe,
        The horse was lost;
    For want of a horse,
        The rider was lost;
    For want of a rider,
        The battle was lost;
    For want of a battle,
        The kingdom was lost;
    All for the want
        Of a horseshoe nail.
    

    Of course the Vulcans observed the whole thing, but it would have been illogical to go out of their way to prevent the collapse of a civilization that was so unwilling to learn to use efficient tools. They must have gotten Darwin's memo.

    Note: the merge sort would have taken about 180,000,000,000 iterations before the first rabbit could sell, but the Vulcans had a copy of John Orwant's "Mastering Algorithms with Perl", and knew that a Fibonacci Heap was a better alternative for choosing who goes first out of a huge group. That's the one the Vulcan god uses, but he doesn't reveal how he decides precedence. ;) At any rate, it would be illogical for Vulcans to keep rabbits as pets. But if they did, inserts would occur in O(1) (amortized), and extractions (selling a single rabbit) would take O(log n) at point of sale (about 22 units of computation). There's no need to actually do any sorting. At that point it just becomes imperative to find enough markets to ship them faster than they reproduce.
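
    For the curious, here is a rough priority-queue sketch in Perl, using a plain binary heap rather than a Fibonacci heap (simpler to show; extraction is the same O(log n), though inserts are O(log n) instead of amortized O(1); CPAN's Heap distribution offers Heap::Fibonacci if you want the real thing). The rabbit data is hypothetical:

        use strict;
        use warnings;

        # Binary min-heap keyed on date of birth; ISO date strings
        # compare correctly with lt/le. No full sort ever happens.
        my @heap;

        sub push_rabbit {                 # O(log n): sift up
            my ($rabbit) = @_;
            push @heap, $rabbit;
            my $i = $#heap;
            while ($i > 0) {
                my $p = int(($i - 1) / 2);
                last if $heap[$p]{born} le $heap[$i]{born};
                @heap[$p, $i] = @heap[$i, $p];
                $i = $p;
            }
        }

        sub next_to_ship {                # O(log n): sift down
            return undef unless @heap;
            my $top  = $heap[0];
            my $last = pop @heap;
            if (@heap) {
                $heap[0] = $last;
                my $i = 0;
                while (1) {
                    my ($l, $r) = (2 * $i + 1, 2 * $i + 2);
                    my $min = $i;
                    $min = $l if $l <= $#heap && $heap[$l]{born} lt $heap[$min]{born};
                    $min = $r if $r <= $#heap && $heap[$r]{born} lt $heap[$min]{born};
                    last if $min == $i;
                    @heap[$i, $min] = @heap[$min, $i];
                    $i = $min;
                }
            }
            return $top;
        }

        push_rabbit({ name => 'Thumper', born => '2011-02-03' });
        push_rabbit({ name => 'Hazel',   born => '2010-12-25' });
        print next_to_ship()->{name}, "\n";   # Hazel ships first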


    Dave

Re: Processor or Memory
by zentara (Archbishop) on Jul 19, 2011 at 12:40 UTC
    So I ask again, is it more important to optimise for memory usage or processor usage, and I assure you... the question has hidden depth.

    If you have to have an answer, from a pure zen perspective you would want to optimize for processor usage. Why? Because memory is cheap; super-fast processors are not.


    I'm not really a human, but I play one on earth.
    Old Perl Programmer Haiku ................... flash japh
      Well, if you're gonna be like that, we might as well ask if Perl has the Buddha nature?
Re: Processor or Memory
by zek152 (Pilgrim) on Jul 19, 2011 at 13:17 UTC

    As has been mentioned, frequency has leveled off. CPU companies (read: AMD, Intel, and even ARM to some degree) are using multiple cores to increase performance. However, many programs, even though they use multiple threads, are not written to take advantage of the 2, 4, 6, or 8 cores seen in modern CPUs.

    You asked if it was more important to optimize code for CPU or for memory. Both are important, but the greatest gain will come from writing applications that can take advantage of the increasingly parallel architecture that is the future for processors. And of course, compilers, interpreters and virtual machines still need to "catch up".
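
    As a taste of what that looks like in Perl, here is a minimal sketch using the core threads module to split a trivially parallel summation across workers (the worker count is an arbitrary assumption, and perl must be built with ithreads):

        use strict;
        use warnings;
        use threads;

        my $workers = 4;                      # assume a quad-core box
        my $n       = 10_000_000;
        my $chunk   = int($n / $workers);

        # Each worker sums its own slice; the OS is free to schedule
        # each thread on a separate core.
        my @threads = map {
            my ($lo, $hi) = ($_ * $chunk + 1, ($_ + 1) * $chunk);
            $hi = $n if $_ == $workers - 1;   # last worker takes the remainder
            threads->create(sub {
                my $sum = 0;
                $sum += $_ for $lo .. $hi;
                return $sum;
            });
        } 0 .. $workers - 1;

        # Join in list context (via map) so each thread's return value
        # comes back intact, then total the partial sums.
        my $total = 0;
        $total += $_ for map { $_->join } @threads;
        print "$total\n";                     # 50000005000000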

      Apache2 is multi-threaded, AFAIK: when a request is received, it spawns off a new thread, which the O/S then schedules on any available core.

        Yes, Apache2 is multi-threaded. However, that does not mean that it is optimized for multiple cores. I can't comment on Apache2, but I do know that there is often an overhead to using a different core (because of memory transfers, etc.).

        There has been a lot of progress made in the last few years in getting more use out of those multiple cores. An interesting project is http://openmp.org/wp/about-openmp/. I believe that gcc 4.3.2 has support for some of the OpenMP pragmas via the -fopenmp option.

        Managing resources across cores is a relatively new challenge and will require new ways of programming to conquer.

Re: Reaped: Processor or Memory
by mr_mischief (Monsignor) on Jul 19, 2011 at 19:16 UTC

    Remember that in certain situations memory locality matters more than overall memory size. What can fit in the cache together and stay there longer will speed up a program on a multi-tiered memory system (which is just about everything running these days). Programs that make wild jumps through the code, or that load a large data structure only to touch just part of it and overwrite it again, can slow the processor caches to a relative crawl. In a low-level language, you can hand-optimize this stuff. In a mid-level language like C, you can hint to the compiler and it takes care of most details. In a high-level language, the tools may or may not have much done in this area, depending on the language and implementation.
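
    A Perl-level analogue of the "load a large data structure to touch just part of it" mistake is slurping a whole file to inspect one field per line. A rough sketch (the file name is hypothetical):

        use strict;
        use warnings;

        my $file = 'big.log';    # hypothetical large input

        # Wasteful: materialise every line in memory, then touch only a
        # fraction of each one.
        my $errors = do {
            open my $fh, '<', $file or die "open $file: $!";
            scalar grep { /^ERROR\b/ } <$fh>;
        };

        # Cache-friendlier: stream line by line; the working set stays
        # small enough to live in the processor caches.
        $errors = 0;
        open my $fh, '<', $file or die "open $file: $!";
        while (my $line = <$fh>) {
            $errors++ if $line =~ /^ERROR\b/;
        }
        close $fh;
        print "$errors errors\n";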