I think you can take advantage of Tie::File and tie your file to an array, then use the standard method from perldoc to remove duplicate elements from an array:
#!/usr/bin/perl
use strict;
use warnings;
use Tie::File;

# Tie the file's lines to @file; changes to the array are written back to the file
tie my @file, 'Tie::File', 'myfile' or die "Can't tie file: $!";

# Keep only the first occurrence of each line
my %saw;
@file = grep { !$saw{$_}++ } @file;

untie @file;
This is example b) from perldoc -q 'How can I remove duplicate elements from a list or array?'. It may be less efficient than reading the file line by line with a single hash, but I think Tie::File could give you other ideas about solving your problem; I just thought this might throw some new light on it.
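For comparison, here is a minimal sketch of that line-by-line approach with a single hash: read the input once, print each line only the first time it is seen, and write the result to a second file. The subroutine name and file names are my own placeholders, not part of the perldoc recipe.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Copy $src to $dst, keeping only the first occurrence of each line.
# Uses a single hash (%seen) so only unique lines are held in memory.
sub dedup_file {
    my ($src, $dst) = @_;
    open my $in,  '<', $src or die "Can't open $src: $!";
    open my $out, '>', $dst or die "Can't open $dst: $!";
    my %seen;
    while ( my $line = <$in> ) {
        print {$out} $line unless $seen{$line}++;
    }
    close $in;
    close $out;
}
```

Unlike the Tie::File version, this writes to a new file rather than editing in place, so you could rename the output over the original afterwards once you are happy with the result.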
Note: I did not have time to test the code I posted above (I'm late for work already :P), so please make a backup of your file before trying this script on it.