in reply to RE: Re: each or keys?
in thread each or keys?

Only some of those are true.

It is true that foreach over (@array) uses the original array as a list. It is also true that foreach over (1..1_000_000) does not build a list of a million entries. But foreach over a filehandle most assuredly does slurp the file. Try running the following one-liners interactively to see the difference:

perl -e '$| = 1; print while <>'
perl -e '$| = 1; print foreach <>'
You will find that the first spits back your lines interactively. The second has to wait to slurp up everything you have to say before it starts printing.

Should you ever need to write a filter for dealing with a large amount of data, be very, very careful to use while instead of foreach!
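A minimal sketch of such a filter, written both ways (the uppercasing is just a placeholder transformation of my own; any per-line work would do). Only the while version streams:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Streaming filter: <> in scalar context reads ONE line per iteration,
# so memory use stays flat no matter how big the input is.
while ( my $line = <> ) {
    print uc $line;
}

# The version to avoid on large inputs:
#   print uc foreach <>;
# <> in list context reads the ENTIRE input into memory first.
```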

One last note, though. If you check the benchmark, you will find that keys ran faster: 423.73 iterations per second versus 356.01. If memory is not a constraint, then for hashes of this size, foreach over keys beats while over each!

RE: RE (tilly) 3: each or keys?
by extremely (Priest) on Oct 11, 2000 at 03:52 UTC
    Ouch, I was aware of the fix that allows 1..1000000 to work (thus I started my post with "And..."); I was, however, under the impression that foreach (<FH>) {} was special versus foreach (<>) {}.

    Guess I's been spoilt by dem fancy computairs wif all dat memoree. =) And I never tried to use $. before either; THAT would have made the problem clearer...

    #!/usr/bin/perl -w
    open FH, "</really/huge/file" or die "Eeek, $!\n";
    foreach (<FH>) {
        print "$. - $_";
    }

    Ow! Every line has the same line number! =)
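    A self-contained sketch of the same effect, using an in-memory filehandle (my own stand-in for the huge file) so it runs anywhere. Because foreach slurps the whole handle before the loop body runs, $. is already stuck at the last line number; while reads one line per iteration, so $. advances:

```perl
use strict;
use warnings;

my $data = "a\nb\nc\n";

# foreach: <$fh> is fully slurped in list context before the loop starts,
# so $. is already 3 on every iteration.
open my $fh, '<', \$data or die $!;
foreach (<$fh>) { print "$. - $_" }    # 3 - a, 3 - b, 3 - c
close $fh;

# while: one line per iteration, so $. tracks the line just read.
open $fh, '<', \$data or die $!;
while (<$fh>) { print "$. - $_" }      # 1 - a, 2 - b, 3 - c
close $fh;
```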

    $you = new YOU;
    honk() if $you->love(perl)