in reply to Scan ARP cache dump - memory hog

Currently you're slurping the entire data set into memory at once; not only that, you're copying possibly huge chunks of it several more times. If you build your code around a while() loop and process one line at a time instead of slurping the entire file, you'll be much better off, memory-wise.

    # instead of this:
    my @lines = <DATA>;

    # do something like this:
    while (my $line = <DATA>) {
        ...
    }
Even if you only build @matches in that loop and keep the rest of the code the same, you may see a large improvement (assuming you have few matches compared to the size of the dataset). Declaring arrays with my and arranging the code so they fall out of lexical scope once you're done with them will also let Perl reuse that memory.

If you can more clearly explain what this code is supposed to do, we might be able to find a much more straightforward solution. As it is, the code seems to be doing the same thing over again several times in different ways before printing its final results.


Re: Re: Scan ARP cache dump - memory hog
by seanbo (Chaplain) on Jul 04, 2002 at 02:52 UTC
    Just to further explain what I am trying to achieve: we currently manage a couple hundred subnets at my job. Each Monday morning, we get ARP cache dumps from all of our routers sent to us. People send us requests for IP addresses and DNS names, and there are network admins who are notorious for not returning IP addresses.

    What I am trying to do is take the last few months' worth of ARP information (that is the tail -250000... command; it's just an approximation). We use that to determine which IPs have had no activity for a while, remove the allocation from our records, and notify the admin we had it assigned to that we have reclaimed the address.

    Thanks for the input!

    perl -e 'print reverse qw/o b n a e s/;'