Okay, I'm commenting here just because I thought of another way to solve this problem. What if you sort the lines before you try to eliminate the duplicates? That way identical lines land right next to each other, and you can skip a line simply by comparing it to the previous one: if the two are the same, the current line is a duplicate. This is a good approach when you don't expect many duplicate lines and you plan to sort the output later anyway; might as well sort now and eliminate the duplicates in one pass, and you avoid building a lookup hash that, with few duplicates, would grow nearly as large as the file itself. ;-)
use strict;
use warnings;

my $ff = 'robots.txt';
my @lines;

# Slurp the entire file and split it
# into an array of lines
open my $fh, '<', $ff or die "Sorry, can't open file - $ff: $!\n";
{
    local $/;    # slurp mode: undef the input record separator
    @lines = split /\n/, <$fh>;
}
close $fh;
# Sort so duplicates end up adjacent, then print
# each line only when it differs from the previous one.
# $prev starts undefined so a leading blank line
# isn't mistaken for a duplicate of the initial value.
@lines = sort @lines;
my $prev;
foreach my $L (@lines)
{
    print "$L\n" if !defined($prev) || $prev ne $L;
    $prev = $L;
}
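For comparison, here is a minimal sketch (my own illustration, not code from the thread) of the hash-based approach this trades against: a %seen hash remembers every line printed so far, which preserves the original line order but keeps one hash key per unique line in memory.

use strict;
use warnings;

my $ff = 'robots.txt';
open my $fh, '<', $ff or die "Sorry, can't open file - $ff: $!\n";

# Print a line only the first time we see it;
# %seen holds one key per unique line, so with few
# duplicates it grows almost as large as the file
my %seen;
while (my $line = <$fh>) {
    chomp $line;
    print "$line\n" unless $seen{$line}++;
}
close $fh;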