relaxed137 has asked for the wisdom of the Perl Monks concerning the following question:

For some reporting, I need to parse a 500k text file. Each line of this file holds fields delimited by the character | ; I need to pull out the 10th field and then count how many times each distinct value occurs in it.

Here's my current code for doing this.

perl -n -a -F'\|' -e 'END { print "$v\t$k\n" while (($k,$v) = each %h) } $h{$F[9]}++' FILENAME
Output example:

( ... )

My most recent parse of this file came up with 350 unique IP addresses with anywhere from 1 to 57000 occurrences each.
The time to do this parse on a server I use was:

real 19m12.03s
user 17m38.42s
sys 0m7.87s

So, my question is - Can anyone out there help me make this even faster?
I know that I'm at the mercy of the CPU cycles, etc, but maybe there *is* a way to cut a few minutes off of this report.

As a note, I cannot use "cut -f10 -d'|' | sort | uniq -c" because the file is too big for sort to handle.