in reply to find common data in multiple files
G'day mao9856,
I'd read through one file and store all of its data in a hash; then read through the remaining files, removing hash data that wasn't common. Given these files (in the spoiler) using data from your OP:
[spoiler: sample input files not shown here]
This code:
#!/usr/bin/env perl

use strict;
use warnings;
use autodie;

my @files = glob 'pm_1206312_in*';

my %uniq;
{
    open my $fh, '<', shift @files;
    while (<$fh>) {
        my ($k, $v) = split;
        $uniq{$k} = $v;
    }
}

for my $file (@files) {
    my %data;
    open my $fh, '<', $file;
    while (<$fh>) {
        my ($k, $v) = split;
        $data{$k} = $v;
    }
    for (keys %uniq) {
        delete $uniq{$_}
            unless exists $data{$_} and $uniq{$_} eq $data{$_};
    }
}

printf "%s %s\n", $_, $uniq{$_} for sort keys %uniq;
Produces this output:
ID121 ABC14
ID122 EFG87
ID157 TSR11
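For reference, the core hash-intersection step can be demonstrated on in-memory data, without any files. The sample records below are hypothetical stand-ins for the file contents (the ID200/ID300 entries are invented to show non-common data being dropped):

```perl
#!/usr/bin/env perl

use strict;
use warnings;

# Hypothetical records standing in for the three input files.
my @file_data = (
    { ID121 => 'ABC14', ID122 => 'EFG87', ID157 => 'TSR11', ID200 => 'XXX01' },
    { ID121 => 'ABC14', ID122 => 'EFG87', ID157 => 'TSR11', ID300 => 'YYY02' },
    { ID121 => 'ABC14', ID122 => 'EFG87', ID157 => 'TSR11' },
);

# Seed %uniq with the first file's data ...
my %uniq = %{ shift @file_data };

# ... then drop any key/value pair not present, with the same value,
# in every remaining file.
for my $data (@file_data) {
    for ( keys %uniq ) {
        delete $uniq{$_}
            unless exists $data->{$_} and $uniq{$_} eq $data->{$_};
    }
}

printf "%s %s\n", $_, $uniq{$_} for sort keys %uniq;
```

Only the three pairs common to all records survive, matching the output above.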
— Ken
Replies are listed 'Best First'.
Re^2: find common data in multiple files
by mao9856 (Sexton) on Dec 30, 2017 at 08:32 UTC
by kcott (Archbishop) on Dec 31, 2017 at 01:51 UTC
by mao9856 (Sexton) on Dec 31, 2017 at 06:09 UTC
by kcott (Archbishop) on Jan 01, 2018 at 01:01 UTC
by poj (Abbot) on Dec 31, 2017 at 09:18 UTC
by mao9856 (Sexton) on Jan 01, 2018 at 05:08 UTC
In Section
Seekers of Perl Wisdom