If I've understood your exposition right: I've done something similar, counting transactions between pairs of parties. My approach used a Berkeley DB, with the key being the pair of parties and the value being a pair of count and amount. That was slower than an all-RAM solution but had the nice advantage that it worked. I also suspect it would be faster than making multiple passes through your large dataset, though I haven't measured that.
If you're doing this in Perl, I'd go for DB_File and store strings in it.
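To make the idea concrete, here's a minimal sketch of the same scheme in Python rather than Perl, using the standard `dbm` module as a stand-in for DB_File/Berkeley DB. The key format (`"a|b"`) and the `"count:amount"` string encoding are my own choices for illustration:

```python
import dbm
import os
import tempfile

def record_transaction(db, party_a, party_b, amount):
    """Accumulate a (count, amount) pair under the party-pair key."""
    key = f"{party_a}|{party_b}".encode()
    if key in db:
        count_s, total_s = db[key].decode().split(":")
        count, total = int(count_s) + 1, float(total_s) + amount
    else:
        count, total = 1, amount
    # Store the pair as a plain string, as you'd do with DB_File.
    db[key] = f"{count}:{total}".encode()

# Usage: the database lives on disk, so it survives RAM limits.
path = os.path.join(tempfile.mkdtemp(), "txn_counts")
with dbm.open(path, "c") as db:
    record_transaction(db, "alice", "bob", 10.0)
    record_transaction(db, "alice", "bob", 2.5)
    count_s, total_s = db[b"alice|bob"].decode().split(":")
```

The string encoding keeps things simple and portable; you pay a small parse/format cost per update, which is usually dwarfed by the disk access anyway.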
Since the order of votes doesn't seem to matter in your data, you can scale your processing time linearly by splitting the data across machines, at least if the results you want can be reduced to something associative and commutative, such as a count of items and a sum of items. You do have to be careful when merging the collected data back together: add sanity checks that verify the record count in each chunk matches that chunk's reported count, and that the sum over all chunks equals the total number of records.
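The split/merge step with its sanity checks can be sketched as follows (a toy illustration with a list of numbers standing in for your records; the chunking and function names are my own):

```python
from functools import reduce

def summarize(chunk):
    """Per-chunk reduction: (record count, running sum).
    Both count and sum are associative and commutative, so
    chunks can be processed in any order on any machine."""
    return (len(chunk), sum(chunk))

def merge(a, b):
    """Combine two partial results."""
    return (a[0] + b[0], a[1] + b[1])

records = [3.0, 1.5, 2.5, 4.0, 0.5, 8.5]
chunks = [records[0:2], records[2:4], records[4:6]]

# Each of these could run on a different machine.
partials = [summarize(c) for c in chunks]
total_count, total_sum = reduce(merge, partials)

# Sanity checks when merging back together:
# the reported counts must add up to the total number of records.
assert sum(cnt for cnt, _ in partials) == len(records)
assert total_count == len(records)
```

Because the reduction is order-independent, a lost or duplicated chunk shows up immediately as a count mismatch, which is exactly what the sanity checks are for.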