PerlMonks
Re: Removing duplicates lines

by zork42 (Monk)
on Sep 04, 2013 at 19:45 UTC ( #1052429 )


in reply to Removing duplicates lines

Assuming you are only concerned with repeated numbers in consecutive lines, then something like this would do (untested):

use strict;     #### Always have these lines.
use warnings;   #### They will save you lots of time debugging
use DBI;
use Time::localtime;
use File::Compare;
use XML::Simple; # qw(:strict);
use Data::Dumper;

my $user = "213256";
my @filesToProcess = glob "/export/home/$user/Tests/Match/dummy2*";
                                   #### Need ""s. ''s do not do variable interpolation
foreach my $file (@filesToProcess)
{
    open(FILE, $file) or die "Can't open `$file': $!";
    my $prev_var = -1_000_000;     #### set this to a value that will never appear
                                   #### as the numbers you need to check.
                                   #### (Obviously the 2-digit numbers must be in the range 0 to 99)
    while ( my $line = <FILE> )    #### do NOT read entire files into memory if they are big.
                                   #### Much better to process them a line at a time
    {
        chomp $line;
        my $var = substr($line, 10, 2);    #### Do you mean substr($line, 9, 2)?
                                           #### substr() considers the first character to be at offset zero
        if ($var != $prev_var)     #### if number is different to number in previous line, then process it
        {
            $prev_var = $var;
            # ... process $line here ...
        }
    }
    close FILE;
}

======

Thought #1:

In your example the lines are sorted by the 2-digit number. If that is typical, then at most you'll only have to process 100 lines from the huge files.

Is that what you expect?
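To make the "at most 100 lines" point concrete, here's a minimal, self-contained sketch of the consecutive-dedup idea from the code above, run on made-up sample data (the offset-9 question from my comment applies here too):

```perl
use strict;
use warnings;

# Hypothetical sorted sample: the key repeats only on consecutive lines.
my @lines = qw( AB000000001XXXX AB000000001YYYY AB000000002ZZZZ );

my $prev = '';
my @unique;
for my $line (@lines) {
    my $key = substr($line, 9, 2);   # 2-digit field, first char is offset 0
    next if $key eq $prev;           # skip consecutive repeats
    $prev = $key;
    push @unique, $line;
}
# @unique now holds one line per distinct key: at most 100 lines
# when the key is a 2-digit number.
print scalar(@unique), "\n";   # prints 2 for this sample
```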

======

Thought #2:

Q: What are you supposed to do if you get repeated numbers, but in non-consecutive lines like below?
AB000000026JHAHKDFK
AB00000003033AFSFAS  = "30" line
AB000000028JHKHKHKJ
AB000000030HJHKH80J  = "30" line
AB0000000324446KJHK
AB000000030LOIKJUJ8  = "30" line
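If repeats could turn up on non-consecutive lines, the usual answer is a %seen hash instead of tracking only the previous value. A minimal sketch on made-up sample data (untested against your real files):

```perl
use strict;
use warnings;

# Hypothetical sample where the "30" lines are NOT adjacent.
my @lines = qw(
    AB000000026JHAHKDFK
    AB000000030AAFSFASX
    AB000000028JHKHKHKJ
    AB000000030HJHKH80J
);

my %seen;
my @first_occurrences;
for my $line (@lines) {
    my $num = substr($line, 9, 2);   # the 2-digit field
    next if $seen{$num}++;           # skip any later repeat, consecutive or not
    push @first_occurrences, $line;
}
# Only the first "30" line survives, no matter where the repeats appear.
print scalar(@first_occurrences), "\n";   # prints 3 for this sample
```

Note this costs memory proportional to the number of distinct keys, which is trivial for 2-digit numbers but worth thinking about if the field grows.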


UPDATE:

Replaced:
foreach my $line ( <FILE> )        #### do NOT read entire files into memory if they are big.  Much better to process them a line at a time
with
while ( my $line = <FILE> )        #### do NOT read entire files into memory if they are big.  Much better to process them a line at a time

Embarrassing bug, that! In my defence the original code had 'foreach' and I probably just missed it.
Had I written the code from scratch I would (I hope!) have used 'while'. I'm still an idiot though! :)

Thanks very much to Not_a_Number for pointing this out!


Re^2: Removing duplicates lines
by Not_a_Number (Parson) on Sep 04, 2013 at 20:15 UTC

    Small remark.

    foreach my $line ( <FILE> )  #### do NOT read entire files into memory if they are big.

    With foreach, you ARE reading the whole file into memory!

    Use while:

    while ( my $line = <FILE> )

      Doh!

      Thanks for pointing that out!
      I've updated my post to remove my stupidity :)
      ++
Re^2: Removing duplicates lines
by vihar (Acolyte) on Sep 04, 2013 at 20:33 UTC
    Actually, as of now they are 2 digits, but in future the field is supposed to expand; I can change my code accordingly. As for your second concern, a number would never repeat on non-consecutive lines. Thanks for your help!
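Since the field width is expected to grow, it may be worth hoisting the offset and length into variables so only one place changes later. A minimal sketch (the variable names here are made up):

```perl
use strict;
use warnings;

my $KEY_OFFSET = 9;    # where the number starts (first char is offset 0)
my $KEY_LENGTH = 2;    # widen this when the numbers grow beyond 2 digits

my $line = "AB000000030HJHKH80J";
my $key  = substr($line, $KEY_OFFSET, $KEY_LENGTH);
print "$key\n";        # prints "30"
```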
