
Re^4: CPU cycles DO NOT MATTER!

by moritz (Cardinal)
on Apr 17, 2008 at 16:57 UTC

in reply to Re^3: CPU cycles DO NOT MATTER!
in thread CPU cycles DO NOT MATTER!

If you're fitting twice as many transistors into the same space, then the same chip design using the new process will cost about half as much because you can produce twice as many per wafer

You're neglecting the fact that smaller structures need purer "raw" material, which implies much more preprocessing.

Also, this calculation only works as long as you can get light sources with sufficiently short wavelengths. Once you can't, you have to resort to electron beams, which are slooooow.

To turn a tiny bit more on topic: this is a good example of why you can't just stop thinking at one abstraction layer, which we recently discussed in another meditation ;-)

Re^5: CPU cycles DO NOT MATTER!
by mr_mischief (Monsignor) on Apr 17, 2008 at 17:28 UTC
    The old adage "if all other things are held equal" applies here. It does get more complicated when other things can't be held equal, as you've pointed out.

    There are actually a number of issues with the continuation of Moore's Law. You've pointed out two, but others come to mind as well, such as current leakage at such small feature sizes forcing research into entirely different substrate materials. The fact that tighter spacing leaves less forgiving clock variances is another one to consider. That's very similar to, if not exactly, the problem that recently bit AMD with the bugs in its initial Phenom and Barcelona-core Opteron releases.

    The world's starting to scale programs out rather than just up in response to these difficulties. It would be wise for us as programmers to optimize the programs that really need it by using more of the hardware that's already available, rather than counting on faster individual threads of execution to come along. Threads, forked processes, clustering, and all manner of IPC and communication need to be considered. It's a whole new world of how programs will be written for performance, and the "faster processor next year" mindset is wholly outdated in it. For many applications, the trade-off will soon no longer be optimizing for a single processor vs. buying a faster processor, but optimizing for minimal state passing between disparate hardware nodes vs. installing more servers in a rack.
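As a minimal sketch of the fork-based approach (the chunk layout and the per-chunk sums are placeholder work invented for illustration, not anything from this thread), splitting a job across child processes in Perl 5 might look like:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical work: each child sums one chunk independently.
my @chunks = ([1 .. 5], [6 .. 10], [11 .. 15]);
my @pids;

for my $chunk (@chunks) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                  # child process: do one chunk
        my $sum = 0;
        $sum += $_ for @$chunk;
        print "child $$: sum = $sum\n";
        exit 0;
    }
    push @pids, $pid;                 # parent: remember the worker
}

waitpid($_, 0) for @pids;             # reap every worker
print "all workers done\n";
```

A real program would send results back through pipes or sockets rather than just printing them, which is exactly where the cost of state passing between workers starts to matter.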

    There is hope from materials science and electronics research, like germanium, gallium arsenide, graphene, and spintronic storage, for Moore's Law to continue for some time. However, scaling hardware both up and out instead of just up has already made it all the way to the desktop.

    Unless we as programmers are ready to resign ourselves to many programs running modestly faster rather than single programs running leaps and bounds faster, we need to keep in mind that concurrency really does matter.

    As Perl programmers specifically, we need to work on getting Perl to support many kinds of concurrency, and on using that support to our advantage. Perl 6 has many features that may be implemented in ways that support concurrency better than Perl 5 does: hyperoperators, explicit support for coroutines, and stronger support for serialization/deserialization of data and code, to name a few.
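For instance, a Perl 6 hyperoperator such as @a >>+<< @b applies an operator pairwise across two lists, and because each pairing is independent, an implementation is free to evaluate them concurrently. A rough Perl 5 approximation of the semantics (sequential, just to show what the operator computes):

```perl
use strict;
use warnings;

my @a = (1, 2, 3);
my @b = (10, 20, 30);

# Pairwise application, as the Perl 6 hyperoperator @a >>+<< @b
# would do; each element's computation is independent of the rest,
# which is what makes the construct a candidate for parallelism.
my @sums = map { $a[$_] + $b[$_] } 0 .. $#a;

print "@sums\n";    # prints "11 22 33"
```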
