PerlMonks  

Re: "Native Perlish"

by Abigail-II (Bishop)
on Mar 26, 2003 at 22:27 UTC [id://246116]


in reply to Re: Re: "Native Perlish"
in thread "Native Perlish"

You use a system call. Do you know what your kernel is actually doing? That is, do you know how NT works internally?

Do you have man pages for your tools, system calls, configuration files?

You have a program; it runs slower than you expected. You want to know how much time it spends in each system call. Can you do that in NT?

Your service contract says the service you provide needs to be available 99.999%, 365 x 24. Can NT and its cluster solution provide this?

Your important application requires at least 16 CPUs and 128 Gb of memory. You want the ability to add, remove and replace CPUs and memory, without the need to reboot.

Finally, you want to use a script written by Abigail. Abigail doesn't cater for NT. ;-)

Abigail

Re: Re: "Native Perlish"
by BrowserUk (Patriarch) on Mar 27, 2003 at 03:21 UTC

    You use a system call. Do you know what your kernel is actually doing? That is, do you know how NT works internally?

    I guess I can re-word that as "Do you have the sources to the kernel?", and the obvious answer is no. As it happens, I worked on the internals of OS/2, versions 1.0 through 2.2, for a period of about 8 years, and much of the internals have a common heritage; but even if I could dig out some old tape backups of the source trees from back then, it's doubtful they would do me any good these days.

    That said, I have to question the value of having OS source access. Yes, it may allow you to track down the source of the problem and even arrive at a patch. And if I am talking about my home system, or even an application running on one or a small number of systems in a commercial environment, I could apply that patch to those systems and get on with life. This is even possible if the application is designed to run in-house, where I can control all the software components that run on and with the application requiring the patch. The story is different, however, if the application is to be supplied to run on systems outside my control. Any patch I come up with to 'fix' the OS for my application may break other applications. In the Unix world, it is quite possible that the people to whom I supply the application are running a different flavour of Unix, which may or may not be similarly afflicted. If I build my application to rely upon a patch, I may create a situation where I need to supply not just the patch, but the entire OS along with the application. This would mean my customers, whether they are paying customers or just another department or division of the same commercial or governmental body I work for, have to compromise their laid-down OS standards in order to run the application I am supplying.

    I might even attempt to feed this patch back into the source tree at the origin of the OS, but given the hysteresis of the mechanisms that control and enact such patching, it is quite likely to take considerable time and effort to persuade them to adopt the patch, and even longer for that to feed back into a version that I might upgrade to.

    It may be that there are several variations of a given base OS that my customers are using, which could necessitate my attempting to feed my patch into several source trees. Then we get the situation where one or more of the controlling authorities of these source bases refuses to apply the patch because it would break existing applications that have come to rely upon the bug I am fixing.

    Perhaps my biggest problem with the Unix world is the proliferation of different flavours. Even within Linux, the newest and currently most popular variation, a quick search at linux.org reveals 180 different distributions. I gave up trying to catalogue the distributions in the wider Unix world. Even within single companies, there can sometimes be many variations, each specific to the hardware it is designed to run on. So if I am running AIX or HP-UX or whatever, and I have (paid-for?) source code access, and I come up with an OS patch to allow my application to run, getting that patch adopted outside my own organisation is likely to be a long process fraught with all the troubles mentioned above. If my application has to run across flavours of Unix, the idea that I might make my application work by patching the OS is a non-starter.

    If you (some chance:) accept this, then I wonder (not dismiss, just wonder) about the benefits of having the sources available.

    Do you have man pages for your tools, system calls, configuration files?

    If you'll allow me to read "man pages" as documentation, then yes. The documentation for NT is available in several forms: on CD and on the web via MSDN. Having the NT Resource Kit for your version of NT/W2K/XP is a great boon, and isn't so horrendously expensive, especially if you stick with older versions and can acquire them second-hand or at knock-down prices. One of the reasons I still use NT4 is that I had a subscription to MSDN (billable to my client) for 2 or 3 years, and therefore have a considerable amount of tools and docs that I wouldn't have, or that would cost me dearly, if I wanted to achieve the same level of coverage with XP. There are other reasons for my not having upgraded to XP that have much more to do with fundamental concerns about privacy.

    Whether the documentation I have is as complete as the man pages for any given flavour of *nix is debatable. My best guess is that in some cases it is as good, in some better, in some worse. I doubt it is a valid exercise to try to compare them quantitatively.

    You have a program; it runs slower than you expected. You want to know how much time it spends in each system call. Can you do that in NT?

    Exactly as you have stated it? Questionable. I can, however, load up the debug version of the kernel and run the application under the auspices of a profiler and/or a debugger. I could re-compile the application to use the Call Attributed Profiler (CAP.DLL) that comes as part of the NT Server Resource Kit.

    Much information regarding memory usage, network usage, disk usage, context switches, etc. (it's a very long list), for the system as a whole or for individual applications, can be gathered non-invasively (even from another authorised system on the other side of the world) using the PerfMon utility that comes as part of the OS. It will even do it in real time and draw me a nice moving graph as it does so. Given your stated feelings regarding graphics, this may not impress you much, but having had to use PerfMon to track down some extremely obscure bugs that only ever manifested themselves in the live system, the non-intrusive nature of the tool was a godsend, and the graphical interface made visualising and spotting the problem much, much easier than sifting through the reams and reams of numbers that the graphs represent. The ability to graph two machines, ostensibly doing exactly the same thing, in the same display from a third remote system saved my bacon. Twice. The ability to save the profiled data to a file and replay it in "real time" is also useful. Finally, there is an API that allows the application programmer to add hooks in his own code that then become accessible via the standard PerfMon interface. This is invaluable if you are developing a widely distributed application. Quite why MS don't make more use of this API in their own applications bewilders me. Much of what they do, does.

    Which, if any, of these facilities is comparable to the facility you are alluding to I am not sure, but suffice it to say I can get performance/profile data on almost anything, if the need is great enough.
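    As an aside for readers following along: on the Unix side, `strace -c` (Linux) or `truss -c` (Solaris) produces exactly the per-syscall time summary Abigail is asking about. A minimal, portable sketch of the coarser version of the same question, the user-time versus system-time split that both worlds expose, using nothing beyond Python's standard library (the loop body is just illustrative syscall-heavy busywork):

```python
import os

# os.times() reports CPU time for the current process, split into
# time spent running user code and time spent inside the kernel.
# The system-time figure is the aggregate answer to "how long did
# my system calls take?", without the per-call breakdown that
# strace -c would give.
before = os.times()

# Syscall-heavy busywork: each os.stat() issues a stat(2) call.
for _ in range(20000):
    os.stat(".")

after = os.times()
print(f"user   time: {after.user - before.user:.3f}s")
print(f"system time: {after.system - before.system:.3f}s")
```

    The same split is visible in NT via Task Manager's kernel-time display or PerfMon's "% Privileged Time" counter; the per-call breakdown is where the tooling gap the thread is arguing about actually lies.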

    Your service contract says the service you provide needs to be available 99.999%, 365 x 24. Can NT and its cluster solution provide this?

    Your important application requires at least 16 CPUs and 128 Gb of memory. You want the ability to add, remove and replace CPUs and memory, without the need to reboot.

    Personally, I don't have these requirements of my NT-based portable :) In the same way, you don't concern yourself with portability issues, as most of your work doesn't require it and possibly precludes it. Currently, my needs do not stretch to 16 CPUs or 99.999% availability. I'm currently spec'ing parts for a home-built twin-CPU machine. My theory is that if I use a dual-CPU motherboard, I will be able to purchase two 1.5 GHz Athlon CPUs for considerably less money than I would currently pay for a 3.0 GHz CPU, and get close to the same performance. And later, when the next generation of 4-5 GHz CPUs becomes available, the prices of the 3.0 GHz parts will have dropped sufficiently to make upgrading relatively cheap. That should see me through another 5 or so years.

    However, I did have a little involvement in looking at and comparing the performance and costs of clustering solutions a few years ago, and the NT solution being looked at back then was Tandem ServerWare, later renamed NonStop Software. I dug up a brief reference to it here. At the time, it was considered comparable in performance (and cheaper, by some measure of that term) to the alternatives available under Unix (HP-UX 11). Looking around, this product line appears to have been incorporated into the Compaq stable through acquisition. The world has moved on a lot--in both the Unix and NT scenes--since my brief flirtation with this around 5 years ago, so I have no way of assessing the current state of play, but the 1997 press report mentions 64 CPUs and 2 terabytes of data. I would think that 16 CPUs and 128 GB of RAM is entirely possible. 8 GB of RAM per CPU is far from extreme these days.

    Finally, you want to use a script written by Abigail. Abigail doesn't cater for NT. ;-)

    Funnily enough, I already run several pieces of your code on my machine. Thanks to the magic of Perl, and the amazing work by some very dedicated people in the P5P group and elsewhere, I have several of your CPAN modules installed and running on my machine quite successfully. And very instructive they are too :) Actually, the main reason I have installed many of them is simply the opportunity to learn from them... so on that basis... many thanks.

    Whilst on that note, you once promised to post a description and explanation of your regex solution to the N-queens problem. Any chance?


    Examine what is said, not who speaks.
    1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
    2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
    3) Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke.
