
Leveraging centralised shared perl version?

by kenm (Acolyte)
on May 07, 2013 at 16:58 UTC
kenm has asked for the wisdom of the Perl Monks concerning the following question:

Good evening Monks,

Long-time lurker; I've posted the odd question.

I've done quite a bit of digging, both on PerlMonks and via Google, and thought I'd ask the question here.

To give some background: we have a very wide range of Perl scripts, typically with hard-coded shebang lines, executed on a large number of machines. The scripts number in the thousands (if not tens of thousands) and the machines in the hundreds, so upgrading and managing the Perl binary on every machine individually is probably not feasible.

Our locally installed Perl versions range from 5.6 through 5.8.6 and 5.8.8, and now also 5.10. We also run a range of OSs: SunOS, Linux, HP-UX, AIX and Windows.

Naturally this is far from ideal, and I'd like to move to a centralized, shared Perl installation across all scripts and boxes, with an appropriate Perl binary for each OS.

I'd had ideas around dynamically replacing the shebang line to point at an appropriate Perl binary on a shared drive, but I'm not sure that's the best option.
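For what it's worth, a sketch of that rewrite idea might look like the following. Everything here is an assumption for illustration: the /shared/perl/&lt;OS&gt;/bin/perl layout, the script directory, and the use of GNU sed's -i option (not portable to stock SunOS/HP-UX sed).

```shell
#!/bin/sh
# Sketch: rewrite the shebang of every .pl script under a directory so it
# points at a shared, per-OS perl. The target layout /shared/perl/<OS>/bin/perl
# is a made-up convention, not an existing path.
rewrite_shebangs() {
    dir="$1"
    target="#!/shared/perl/$(uname -s)/bin/perl"
    # Replace line 1 of each script only if it is already a perl shebang.
    find "$dir" -name '*.pl' -exec sed -i "1s|^#!.*perl.*|$target|" {} +
}

# Usage (per host, or once at deploy time):
#   rewrite_shebangs /opt/scripts
```

Whether a one-off sweep like this is preferable to rewriting at run time is exactly the kind of trade-off I'd welcome opinions on.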

I wanted to pop it out here to get a handle on others' experiences and viewpoints.

Thanks Perl Monks! Appreciate it.


Replies are listed 'Best First'.
Re: Leveraging centralised shared perl version?
by blue_cowdawg (Monsignor) on May 07, 2013 at 17:21 UTC

    I've sort of answered your question here. The crux of it involves using auto_mount maps and variable substitution.
    Variable    Meaning
    --------    ------------------
    ARCH        output of uname -m
    CPU         output of uname -p
    HOST        output of uname -n
    OSNAME      output of uname -s
    OSREL       output of uname -r
    OSVERS      output of uname -v
    Mileage may vary between platforms, but most modern automount daemons support variable substitution.
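    As an illustration of variable substitution in an automounter map, something like the following could mount a per-platform Perl build under one stable path. The server name, export layout, and mount point are all made up here; the real map would follow your site's conventions.

    ```
    # /etc/auto.master entry (illustrative)
    /net/perl    /etc/auto.perl

    # /etc/auto.perl -- the single key "current" resolves to a different
    # NFS export on each platform via variable substitution
    current    -ro    nfshost:/export/perl/$OSNAME-$OSREL/$CPU
    ```

    Every box could then share one shebang line such as #!/net/perl/current/bin/perl, with the automounter picking the right build per host.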

    An alternative approach would be to use a system configuration tool such as Puppet or Chef to manage your automount configuration files, especially if you have a variety of versions of Perl (e.g. 5.8 vs. 5.10) to maintain in your environment, since the Perl version is not an automount variable.

    UPDATE: Added a page about this on my Perl blog. (offsite link) Managing Perl

    Peter L. Berghold -- Unix Professional
    Peter -at- Berghold -dot- Net; AOL IM redcowdawg Yahoo IM: blue_cowdawg
Re: Leveraging centralised shared perl version?
by ig (Vicar) on May 08, 2013 at 08:41 UTC

    I have never had to manage thousands of Perl scripts, but I have managed hundreds of applications on hundreds of servers and many hundreds of workstations of diverse OSs, and I have almost always installed everything, perl included, locally. One can install multiple versions of perl on a system, in case all the Perl scripts cannot be upgraded to work on the same version of perl, allowing you to upgrade incrementally.

    I have rarely used network drives for executables and libraries, and generally prefer automated system builds and automated software distribution to replicate consistent configurations of software locally onto multiple systems. A shared resource significantly increases the interdependence between systems and turns every little upgrade into a major event requiring weeks to months of testing.

    I wouldn't be interested in dynamically editing the Perl scripts at run time on a per-platform or per-system basis (there can be many relevant differences between systems, even where the OS is the same). If I really felt compelled to share Perl scripts across diverse platforms, I would probably write platform-specific wrapper scripts that executed the shared Perl scripts appropriately, so that the shebang lines in the shared scripts were irrelevant. Windows and *nix are different enough that it may not be as simple as rewriting the shebang line.

    But I wouldn't much like to do this either: I would, as much as possible, move the shared code into platform-independent modules and write minimal, trivial platform-specific scripts to load and run the modules.
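    A minimal sketch of such a wrapper on the *nix side might look like this. All the install paths and the shared script name are assumptions; the point is only that the wrapper, not the shared script's shebang, chooses the interpreter.

    ```shell
    #!/bin/sh
    # Hypothetical per-platform wrapper: map this host's OS to a locally
    # installed perl, then hand off to the shared script unchanged.
    pick_perl() {
        case "$1" in
            SunOS)      echo /opt/perl-5.8.8/bin/perl ;;
            HP-UX|AIX)  echo /usr/local/perl-5.8.8/bin/perl ;;
            *)          echo /usr/bin/perl ;;
        esac
    }

    PERL="$(pick_perl "$(uname -s)")"
    echo "selected: $PERL"
    # The real wrapper would replace the echo above with:
    #   exec "$PERL" /shared/scripts/report.pl "$@"
    ```

    On Windows the equivalent would be a .bat or .cmd wrapper, since shebang lines are ignored there anyway.
    
    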

    I'm lazy, and find that I get much more sleep when systems are simple and, as much as possible, consistent but independent, so that the scope of problems and changes is small. This isn't to suggest that common solutions aren't advantageous. I use standardized system builds (automated installs and system cloning) almost exclusively for the 'platform' and common software, and only install unique software on a case-by-case basis. Diversity isn't always your enemy, but I tolerate it only where there are specific justifications.

    This isn't to suggest that a shared resource of perl executables and libraries couldn't be achieved, or that, for people smarter than me, it might not be easy and manageable.

Re: Leveraging centralised shared perl version?
by topher (Scribe) on May 10, 2013 at 19:35 UTC

    What if you were to use #!/usr/bin/env perl as your shebang line? That will pick the appropriate Perl based on your $PATH configuration. You can then just let things go if you don't care which available Perl gets used, or tweak your $PATH to force the use of a specific Perl installation without having to (further) change the scripts.
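    Concretely, the PATH tweak could be as small as this (the /opt/perl-5.10 install location is hypothetical):

    ```shell
    # Prepend the preferred Perl to PATH, e.g. in /etc/profile or the
    # invoking job's environment. /opt/perl-5.10 is an assumed install
    # location, not a known one.
    PATH=/opt/perl-5.10/bin:$PATH
    export PATH
    # Any script beginning with `#!/usr/bin/env perl` now resolves perl
    # through this PATH, finding /opt/perl-5.10/bin/perl first if present.
    ```

    One caveat: daemons and cron jobs often run with a stripped-down PATH, so the tweak has to land in whatever environment actually launches the scripts.
    
    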

    Christopher Cashell

Node Type: perlquestion [id://1032511]
Approved by talexb
Front-paged by Corion