pulling duplicate data from a complex structure

by tcf03 (Deacon)
on Jan 23, 2006 at 18:22 UTC ( [id://524998] )

tcf03 has asked for the wisdom of the Perl Monks concerning the following question:

I have the following code:
push ( @{$map{$$row[1]}{$$row[2]}}, $$row[0] )
    unless $map{$$row[1]}{$$row[2]}{$$row[0]}++;
If I pull out the unless statement, it all works fine; I just end up with duplicate data in my array. Can anyone suggest how to skip the duplicates? I've tried several different bits of code:
push ( @{$map{$$row[1]}{$$row[2]}}, $$row[0] )
    unless ${$map{$$row[1]}{$$row[2]}{$$row[0]}}++;
etc... and none seem to work.

thanks

UPDATE
I figured it out...
push ( @{$map{$$row[1]}{$$row[2]}}, $$row[0] )
    unless grep { /$$row[0]/ } @{$map{$$row[1]}{$$row[2]}};
Ted
--
"That which we persist in doing becomes easier, not that the task itself has become easier, but that our ability to perform it has improved."
  --Ralph Waldo Emerson

Replies are listed 'Best First'.
Re: pulling duplicate data from a complex structure
by injunjoel (Priest) on Jan 23, 2006 at 18:42 UTC
    Greetings,
    Though I would suggest strongly that you consider using a hash, here is something you can do after your pushes to de-dup your data.
    Leave out the unless, since that works fine, and run the following after.
    my @non_duplicates = do {
        # let's use a hash since we want unique values,
        # and localize it so it's a fresh copy
        local %_;
        # undef the values of this localized hash
        # while assigning your values as keys
        undef @_{ @{ $map{$$row[1]}{$$row[2]} } };
        # call keys on the hash to get a list of your
        # unique values
        keys %_;
    };
    # assign our unique list back to your original structure.
    $map{$$row[1]}{$$row[2]} = \@non_duplicates;
    Hope that helps

    -InjunJoel
    "I do not feel obliged to believe that the same God who endowed us with sense, reason and intellect has intended us to forego their use." -Galileo
Re: pulling duplicate data from a complex structure
by Fletch (Bishop) on Jan 23, 2006 at 18:44 UTC

    Your update code scans all the results accumulated so far for every new row added ("Can you say 'slow', boys and girls? Good, I knew you could."</Fred>). Your first item doesn't work because you're trying to push onto something that isn't an array reference (thanks to your incrementing it). You want a parallel %seen hash that tracks which items you've already pushed.
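
    A minimal sketch of that approach (only the push line and %map come from the original post; the name %seen and the surrounding fetch loop are assumed):

    my %seen;   # parallel hash: counts each (key1, key2, value) triple already pushed

    push @{ $map{ $$row[1] }{ $$row[2] } }, $$row[0]
        unless $seen{ $$row[1] }{ $$row[2] }{ $$row[0] }++;

    Because %seen is separate from %map, the post-increment no longer autovivifies $map{$$row[1]}{$$row[2]} as a hash reference, which is what made the push onto it as an array fail in the original unless clause.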

Re: pulling duplicate data from a complex structure
by planetscape (Chancellor) on Jan 24, 2006 at 06:23 UTC
Re: pulling duplicate data from a complex structure
by NiJo (Friar) on Jan 23, 2006 at 18:44 UTC
    Having to remove duplicates manually raises a red flag. In most cases this means that you should redesign the data structure to use a hash.

    The classic duplicate elimination converts the array to hash to array.
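
    A minimal sketch of that idiom, applied to one of the inner arrays from the original structure (the uniq helper and the $key1/$key2 names are illustrative, not from the post):

    # Classic array -> hash -> array: hash keys collapse duplicates,
    # and the grep form preserves first-seen order.
    sub uniq {
        my %seen;
        return grep { !$seen{$_}++ } @_;
    }

    @{ $map{$key1}{$key2} } = uniq( @{ $map{$key1}{$key2} } );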
