
Re^2: Cross platform coding advice

by Tanktalus (Canon)
on Oct 09, 2005 at 01:34 UTC ( [id://498485] )

in reply to Re: Cross platform coding advice
in thread Cross platform coding advice

Just as a side note: the first line (the shebang) is something I try very hard not to hard-code into the script. That is, I would rather set the project up via Makefile.PL or Build.PL and let them fill in the shebang line appropriately. I admit I haven't worked out all the kinks to the point where I can do this every time, but it may be something to think about.
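As a sketch of what that looks like: with ExtUtils::MakeMaker, any script listed in EXE_FILES gets its shebang line rewritten (to the perl that ran Makefile.PL) during the `make` step. The module name and script path below are made up for illustration:

```shell
# Write a hypothetical minimal Makefile.PL. When you later run
# "perl Makefile.PL && make", MakeMaker copies bin/myapp into blib/
# and rewrites its #! line to point at the building perl.
cat > Makefile.PL <<'EOF'
use ExtUtils::MakeMaker;
WriteMakefile(
    NAME      => 'My::App',       # hypothetical distribution name
    VERSION   => '0.01',
    EXE_FILES => ['bin/myapp'],   # scripts whose shebang gets fixed up
);
EOF
```

Module::Build does the equivalent for anything in its script_files list.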

One of the advantages is that you start thinking of your application as something to be distributed. You also end up in an environment where a source-control / version control system (VCS) makes sense and is easy to use. Of course, on Linux, cvs fills that role easily, since it's usually installed by default. On Windows, cvs is supposedly easy to set up (SourceForge has instructions on how to do this for accessing their cvs servers).

Another advantage is simply that you end up with a "make dist" or "./Build dist" command that wraps everything up for you.
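For the curious, here is roughly what "make dist" boils down to, sketched by hand: a versioned directory rolled into a tarball. The dist name and contents are made up; a real MakeMaker dist also carries a MANIFEST, META files, and so on.

```shell
# Hand-rolled stand-in for "make dist" (illustrative names only).
mkdir -p My-App-0.01/bin
printf '#!perl\nprint "hello\\n";\n' > My-App-0.01/bin/myapp
tar czf My-App-0.01.tar.gz My-App-0.01   # the distributable artifact
tar tzf My-App-0.01.tar.gz               # list what ended up inside
```

The point is that the build tooling gives you this single, repeatable packaging command for free.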

Replies are listed 'Best First'.
Re^3: Cross platform coding advice
by schweini (Friar) on Oct 09, 2005 at 02:02 UTC
    Point taken - although the particular software I am developing gets distributed via rsync to the other servers.
    Speaking of CVS - is there a transparent filesystem-to-CVS thingy somewhere out there? I have gotten very used to my 'CTRL+S, perl -c, FTP, firefox-refresh, cp to production-code-dir' devel cycle, yet would love to have versioning and all that, and some kind of CVS filesystem driver for Windows and for Linux would be great for that, I think...

      What I do for one of my projects is develop on my local webserver, check in to cvs, and then ssh to the server and run /srv/cvs/project/CVS/install_server. That script checks the current user, re-runs itself under sudo (which is set up to allow any user to run this script as a particular non-root user without a password), and then updates the real webserver's tree using a cvs update command.
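A minimal sketch of such an install_server-style script, assuming a hypothetical deploy account and webroot (every name here is made up; the real script's details aren't shown in the thread):

```shell
# Write a self-sudoing deploy script: if run as anyone other than the
# deploy user, it re-execs itself via sudo, then refreshes the live
# checkout with "cvs update". Paths and user are illustrative only.
cat > install_server <<'EOF'
#!/bin/sh
DEPLOY_USER=wwwdeploy          # assumed non-root deploy account
WEBROOT=/var/www/project       # assumed live checkout directory
if [ "$(id -un)" != "$DEPLOY_USER" ]; then
    # sudoers must allow this script to run as $DEPLOY_USER, NOPASSWD
    exec sudo -u "$DEPLOY_USER" "$0" "$@"
fi
cd "$WEBROOT" && cvs update -dP   # -d: new dirs, -P: prune empty dirs
EOF
chmod +x install_server
```

The self-sudo trick means callers never have to remember the right user to run it as.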

      An alternative is to make dist (or ./Build dist), and have some sort of private (and properly authenticated/authorised) page where you can just upload the distribution as a tar.gz file. The CGI script can take it, uncompress and unarchive it, then run the build, test, and install steps (./Build && ./Build test && ./Build install), possibly followed by an rsync.
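The server-side half of that might look like the following sketch. The dist name and staging path are made up, and the tarball is fabricated in place to stand in for the upload; the actual build commands are shown as comments since they need a real distribution to run:

```shell
# Unpack an "uploaded" dist into a staging area. (A stand-in tarball is
# created here so the sketch is self-contained; names are illustrative.)
set -e
DIST=My-App-0.01.tar.gz
mkdir -p My-App-0.01
tar czf "$DIST" My-App-0.01          # stand-in for the uploaded file
STAGE=/tmp/deploy_stage
mkdir -p "$STAGE"
tar xzf "$DIST" -C "$STAGE"          # uncompress + unarchive
# Next the CGI would run, inside $STAGE/My-App-0.01:
#   perl Build.PL && ./Build && ./Build test && ./Build install
# possibly followed by an rsync of the installed tree to the live host.
```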

      Basically, pretty much every step you take to get the code into production (or into test, for that matter) should be automatable down to a single command. Generally speaking, I find this type of extra setup work invaluable. Think "Lazy": lots of work up front so you can be lazy later. I started doing this type of thing not because I'm lazy, but because I'm error-prone. I hated typing in commands and getting them wrong. But if I taught the computer how to do my work for me, it was less likely to screw things up - once I had successfully taught it in the first place.
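Tying the thread's steps together, the "single command" could be a small wrapper like this sketch (hostnames, filenames, and the remote script path are assumptions, not anything prescribed above):

```shell
# Write a hypothetical one-command deploy wrapper: syntax-check, roll
# the dist, ship it, and trigger the server-side installer.
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e                                 # abort on the first failed step
perl -c myapp.pl                       # catch syntax errors locally
make dist                              # package the distribution
scp My-App-0.01.tar.gz deploy@server:/tmp/     # assumed host/paths
ssh deploy@server /srv/cvs/project/CVS/install_server
EOF
chmod +x deploy.sh
```

Once this exists, "deploy" is one command you can't mistype halfway through.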
