Re: Delete duplicate data in file

by mulander (Monk)
on Nov 21, 2005 at 07:07 UTC


in reply to Delete duplicate data in file

I think you can take advantage of Tie::File to tie your file to an array, then use the standard method from perldoc to remove duplicate elements from an array:
    #!/usr/bin/perl
    use warnings;
    use strict;
    use Tie::File;

    # Tie the file to @file; editing the array edits the file in place.
    tie my @file, 'Tie::File', 'myfile' or die "Can't tie file: $!";
    my %saw;
    @file = grep { !$saw{$_}++ } @file;    # keep only the first occurrence of each line
    untie @file;
This is example b) from perldoc -q 'How can I remove duplicate elements from a list or array?'. Of course it could be less efficient than using a single hash and reading the file line by line (a sketch of that approach follows after the note below), but I think Tie::File could give you other ideas about solving your problem; it might throw some new light on it.

note: I did not have the time to test the code that I posted above ( I'm late to work already :P ) so please do a backup of your file before trying this script on it.
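For comparison, here is a minimal sketch of the single-hash, line-by-line approach mentioned above. It is only an illustration under assumptions: the input name 'myfile' and the output name 'myfile.new' are placeholders, and it writes to a new file rather than editing in place, so you would rename the output over the original once you are happy with it.

    #!/usr/bin/perl
    use warnings;
    use strict;

    # Read 'myfile' line by line, print only the first occurrence of each
    # line to 'myfile.new'; %seen tracks lines we have already emitted.
    my %seen;
    open my $in,  '<', 'myfile'     or die "Can't open myfile: $!";
    open my $out, '>', 'myfile.new' or die "Can't open myfile.new: $!";
    while ( my $line = <$in> ) {
        print {$out} $line unless $seen{$line}++;
    }
    close $in;
    close $out or die "Can't close myfile.new: $!";

This version never holds the whole file in memory (only one copy of each distinct line in the hash), which is why it tends to be cheaper than the tied-array version for large files.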
