Re (tilly) 1: writing looped programs
by tilly (Archbishop) on Jan 28, 2002 at 23:28 UTC
To wait until the beginning of the next minute, you just
sleep(60 - (localtime)[0]);
However, given the startup time of a script, the person who was concerned about efficiency shouldn't be.
Furthermore given the fact that a persistent process can easily be forgotten about (or not forgotten about, but not tested) when the time comes to reboot your machine, it is unwise to multiply them needlessly. (This is a classic Unix mistake that I already mentioned today.)
In short, the person who told you to be concerned is giving you bad advice. Unless the cron job is generating an unfortunate number of emails when something goes wrong, I strongly recommend just leaving it as it is.
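If you did decide to go persistent anyway, the one-liner above extends naturally into a loop that wakes at each minute boundary. A minimal sketch (the `RUN_FOREVER` environment guard is just so the demo doesn't loop when merely loaded; it is not part of the original advice):

```perl
use strict;
use warnings;

# Seconds until the next minute boundary: (localtime)[0] is the
# current seconds field (0..59), so this normally yields 1..60.
sub seconds_to_next_minute { return 60 - (localtime)[0]; }

# Guarded so the sketch only loops forever when explicitly asked to.
if ( $ENV{RUN_FOREVER} ) {
    while (1) {
        sleep seconds_to_next_minute();    # wake at :00 of each minute
        # ... do the real work here ...
    }
}
```

Note that sleeping until the boundary, rather than a flat `sleep 60`, keeps the loop from drifting when the work itself takes a few seconds.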
Re: writing looped programs
by count0 (Friar) on Jan 28, 2002 at 23:24 UTC
However, I have been told that this is very inefficient and that it would be better to load it into memory and run it as a loop.
I'd just like to post my disagreement with your source ;)
Task scheduling is, and should be IMO, left to the operating system. This is what cron (and Windows task scheduler, and <insert examples from other OSs here>) was made for, and it does a great job of it.
As for the supposed inefficiency: this is just plain untrue. The cron daemon is already running on nearly any POSIX system (unless the BOFH is in a masochistic mood ;), so there is really no added overhead aside from the invocation of perl at each run.
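For concreteness, handing the job to the OS scheduler is a single crontab line; five time fields, then the command (the script path here is hypothetical):

```
# min hour dom mon dow  command
*     *    *   *   *    /usr/bin/perl /path/to/yourscript.pl
```

Five asterisks means "every minute"; cron takes care of launching, and its logs record each run.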
Re: writing looped programs
by grep (Monsignor) on Jan 28, 2002 at 23:33 UTC
You're trading efficiency for reliability and/or time. Cron will happily make sure your program runs, it logs any problems, and it is the 'standard'.
IOW, your small gain in efficiency will cost you development time (logging, rewriting as a loop, debugging). Then it will cost your company when you leave and someone has to figure out what job is running after he/she has already looked in the crontab.
Is this program causing a problem? Have you done 'ps' or 'top' on it? Is it using too much memory? These are questions you should ask before you take something out of cron.
Don't reinvent good wheels
grep
grep> chown linux:users /world
Re: writing looped programs
by Fletch (Bishop) on Jan 28, 2002 at 23:31 UTC
And to argue the other way (somewhat): for a granularity of one minute, you may well be better off with a persistent program. How much startup overhead your program has should influence your decision. If it's a 100-line program that just diddles a logfile or three, you'll probably be better off running from cron. If it's a tens-of-thousands-of-lines monster that has to do 45 seconds of initialization before it runs and generates gobs of network traffic at startup, consider a persistent daemon.
If you do make it persistent, consider POE, which will give you an easy way to set up getting poked at the right interval (cf. POE::Kernel's delay_add method).
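A minimal sketch of that pattern, assuming POE is installed from CPAN; this uses POE::Kernel's delay() to re-arm a named timer event each time it fires:

```perl
use strict;
use warnings;
use POE;    # assumes the POE distribution is available

POE::Session->create(
    inline_states => {
        _start => sub {
            # schedule the first tick one minute out
            $_[KERNEL]->delay( tick => 60 );
        },
        tick => sub {
            # ... do the real work here ...
            $_[KERNEL]->delay( tick => 60 );    # re-arm for the next minute
        },
    },
);

POE::Kernel->run;
```

delay_add() differs in that it stacks additional timers instead of replacing a pending one; for a simple once-a-minute tick, either works.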
Re: writing looped programs
by Biker (Priest) on Jan 28, 2002 at 23:59 UTC
I guess the performance issue that your source is referring to would be the repetitive load and unload of the Perl interpreter.
On a low-end computer this might make a difference.
But, I'd say it's rare.
As other people have already pointed out, crond in itself is very efficient and very reliable.
If you still decide to go for a 24/7 application and also want your 'event' to take place at a given second of the minute (not 60 seconds after the last event finished), then I'd suggest you take a look at Schedule::ByClock, which does just that.
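A sketch of what using it might look like, assuming the module is installed from CPAN (the constructor arguments and method name are per my reading of its interface; treat this as an assumption to check against the module's docs):

```perl
use strict;
use warnings;
use Schedule::ByClock;    # assumed available from CPAN

# Hand control back at second 0 of every minute.
my $scheduler = Schedule::ByClock->new(0);

while (1) {
    $scheduler->get_control_on_second();    # blocks until :00
    # ... do the real work here ...
}
```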
"Livet är hårt" sa bonden.
"Grymt" sa grisen...
Re: writing looped programs
by mrbbking (Hermit) on Jan 28, 2002 at 23:26 UTC
This'll begin a new loop about 60 seconds after the previous iteration finished. Not sure how exact you need to be in executing "at the beginning of every new minute" but this is the easiest thing I can think of.
Does anyone know whether I'm actually doing myself a favor by sleeping at the top of the loop rather than at the bottom? I'm thinking it'll save memory that way - sleep before the variables are set, rather than after they're used.
while (1) {
    sleep 60;
    # do stuff;
}
I'm thinking it'll save memory that way - sleep before the variables are set, rather than after they're used.
I don't believe this will prove beneficial. Perl implements garbage collection, and in most cases will not free() that memory anyhow. So once the variable goes out of scope, its memory is flagged to be reused, iirc.
So once the variable goes out of scope, its memory is flagged to be reused, iirc.
Yeah - that's what I was thinking.
If the sleep is at the top of the loop, before any my variables have been declared, then the variables can't be taking up any memory.
If the sleep is at the end of the loop, then all my variables declared in the loop are still in scope, and therefore must continue to exist.
...right?
Re: writing looped programs
by lemming (Priest) on Jan 28, 2002 at 23:51 UTC
In most cases I'm going to have to go with everyone else's views on this, but I do have one question:
Can this program collide with itself?
You say it reads and writes from the disk. If a run takes longer than a minute, can it get into a state of confusion? Overwriting files, or acting on a state left behind by a previous run, are just a couple of the problems that can result.
If you don't have to worry about any such problem, stick with the current solution. If you do, you probably still want to stick with cron and just put up some intelligent checks.
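One common "intelligent check" is a non-blocking lock file, so a new cron invocation simply bows out if the previous one is still running. A sketch, with a hypothetical lock path:

```perl
use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

# Hypothetical lock path; pick one appropriate for your system.
my $lockfile = '/tmp/myjob.lock';

open my $lock_fh, '>', $lockfile or die "cannot open $lockfile: $!";
unless ( flock $lock_fh, LOCK_EX | LOCK_NB ) {
    # A previous run still holds the lock; exit rather than collide.
    warn "previous run still active, exiting\n";
    exit 0;
}
# ... do the real work; the lock is released when the process exits ...
```

Because the lock dies with the process, a crashed run can never wedge future runs the way a stale PID file can.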
Re: writing looped programs
by belg4mit (Prior) on Jan 29, 2002 at 04:24 UTC
Re: writing looped programs
by thor (Priest) on Jan 29, 2002 at 09:49 UTC
In the spirit of TMTOWTDI, you could, in your perl script, have whatever you are trying to do be done within the context of a signal handler. For example,
#!/usr/bin/perl
$SIG{INT} = \&my_sub;

sub my_sub
{
    $SIG{INT} = \&my_sub;   # re-install the handler (older perls may need this)
    # do stuff
}

while (1)
{
    sleep;   # block until a signal arrives instead of spinning the CPU
}
Then, put an entry in the crontab to send your process a SIGINT every minute (kill -INT (your pid here)). You may even be able to install that crontab entry from the script itself (I would try to figure it out myself, but I'm a bit sleepy right now). So, now with the signal handler installed, your script will "do stuff" only when it catches a SIGINT, which you've arranged to happen once a minute. You also get the benefit of having your process be persistent, so the cost of loading it into memory isn't an issue - if it ever was to begin with.
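The crontab side of this could look something like the following; the pidfile path is hypothetical and assumes the daemon writes its own PID there at startup:

```
# every minute, poke the persistent daemon
* * * * * kill -INT "$(cat /var/run/myjob.pid)"
```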
thor
Generally speaking, you should try to make any signal handler as quick as possible.
Current versions of Perl are not threadsafe, IIRC primarily because perl itself makes use of libraries that are not threadsafe.
The potential problem is that if the long sub executed by your signal handler takes a lot of time, the signal handler may be called a second time. If that happens while perl is executing a system call that is not threadsafe, you can get a core dump.
"Livet är hårt" sa bonden.
"Grymt" sa grisen...
Of course, you could always have it fork off a child process if you expect the sub to run long. All depends on what you're trying to do...
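A sketch of that fork-in-the-handler idea; do_work() and the flag file are hypothetical stand-ins for the real long-running job:

```perl
use strict;
use warnings;

my $flag = "/tmp/fork_demo_$$.flag";    # hypothetical marker the "work" leaves behind

sub do_work {
    # stand-in for the long-running job
    open my $fh, '>', $flag or die "cannot write $flag: $!";
    print {$fh} "done\n";
    close $fh;
}

$SIG{CHLD} = 'IGNORE';    # auto-reap finished children
$SIG{INT}  = sub {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {    # child: do the slow work, then exit
        do_work();
        exit 0;
    }
    # parent returns immediately, ready for the next signal
};
```

The parent's handler returns almost instantly, so a second SIGINT arriving mid-job starts another child instead of re-entering a busy handler.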