Comparing a list to a tab delimited text file

by Azaghal (Initiate)
on Jan 11, 2018 at 17:18 UTC ( #1207122=perlquestion )
Azaghal has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks,
I want to check whether each word from a very large list of words exists in the first column of a (likewise very large) tab-delimited text file, and if so, use some of the other columns of that file to perform some actions on the word.
I got it to work and then optimised the code as best I could to make it quicker. Still, I'm pretty sure there is a more efficient way that I can't think of, as looping through the tab-delimited text file is a bottleneck.

Here is the function I use to go through the file and check whether the word exists in it:
sub lexique {
    foreach (@lexiques) {
        my @lexiks = split /\t/, $_;
        chomp @lexiks;
        my $motlexique = $lexiks[0];
        my $genre      = $lexiks[4];
        my $nombre     = $lexiks[5];
        if ($motlexique eq $thewordimsearching) {
            # Do some things here
            last;
        }
    }
}
Could you guide me a bit here? Please keep in mind that I'm self-taught and not so experienced with Perl.

Replies are listed 'Best First'.
Re: Comparing a list to a tab delimited text file
by kcott (Chancellor) on Jan 12, 2018 at 00:11 UTC

    G'day Azaghal,

    Welcome to the Monastery.

    In this sort of situation, it's best to advise us of both the size of the data and the size of available memory. You'll get very different answers for a 100 MB file with 8 GB of memory vs. an 8 GB file with 100 MB of memory.

    It looks like you've slurped your entire "tab delimited text file" (TSV) into what appears to be a global array, @lexiques. You are then processing every TSV record every time you do a search (for another global variable, $thewordimsearching). Using global variables is fraught with problems and should be avoided wherever possible. Processing the entire TSV repeatedly for every search is a very poor choice.

    When working with tab- (or comma-, or pipe-, or whatever-) separated data, reach for Text::CSV in the first instance. This module is very easy to use and has been written specifically to handle this type of data. Except for maybe an academic exercise, this is not a wheel you should consider reinventing. If you also have Text::CSV_XS installed, it will run faster.

    In the example code below, I show how to capture the TSV data once and then use it as many times as necessary. If the script is to be run multiple times, you might want to consider serialising the hash data using something like the builtin Storable module. If the TSV data exceeds your memory capacity, you could store the processed data in a database. These are just a couple of suggestions: you haven't supplied sufficient information about the data, your environment, or the intended usage, to do anything beyond making tentative guesses as to how you should best proceed.
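    As a minimal sketch of that serialisation idea (the cache file name and the stand-in data are just for illustration; in practice you'd build the hash with initialise() from the script further down):

    use strict;
    use warnings;
    use Storable qw(store retrieve);

    my $cache = 'pm_1207122_cache.storable';   # hypothetical cache file name

    my $tsv_data;
    if (-e $cache) {
        # later runs: deserialise the hash instead of re-parsing the TSV
        $tsv_data = retrieve($cache);
    }
    else {
        # first run: build the hash, then serialise it for next time
        my %data = (A => [qw{A1 A2 A3 A4 A5 A6}]);   # stand-in data
        store(\%data, $cache);
        $tsv_data = \%data;
    }

    print "A -> @{ $tsv_data->{A} }[3,4]\n";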

    Here's the dummy TSV file I used for my testing:

    $ cat pm_1207122_data.tsv
    A   A1  A2  A3  A4  A5  A6
    B   B1  B2  B3  B4  B5  B6
    C   C1  C2  C3  C4  C5  C6
    D   D1  D2  D3  D4  D5  D6
    E   E1  E2  E3  E4  E5  E6
    F   F1  F2  F3  F4  F5  F6

    In the following example script, &initialise is run once to capture the TSV data, and &search is run as many times as you want. Note how the arguments are passed to the subroutines, including the use of references (\%tsv_data) so that only a single scalar is passed instead of a huge data structure. Also note the limited scope of @words, $tsv_file, and %tsv_data: they cannot be accessed directly outside of the anonymous block in which they are declared.

    #!/usr/bin/env perl -l

    use strict;
    use warnings;
    use autodie;
    use Text::CSV;

    {
        my @words    = 'A' .. 'I';
        my $tsv_file = 'pm_1207122_data.tsv';
        my %tsv_data;

        initialise($tsv_file, \%tsv_data);
        search($words[rand @words], \%tsv_data) for 1 .. 5;
    }

    sub initialise {
        my ($file, $data) = @_;

        open my $fh, '<', $file;
        my $csv = Text::CSV::->new({sep_char => "\t"});

        while (my $row = $csv->getline($fh)) {
            $data->{$row->[0]} = [@$row[1 .. $#$row]];
        }
    }

    sub search {
        my ($find, $data) = @_;

        print "$find: ", exists $data->{$find}
            ? "@{$data->{$find}}[3,4]"
            : '<not found>';
    }

    Here are the results of a couple of sample runs:

    C: C4 C5
    G: <not found>
    E: E4 E5
    D: D4 D5
    D: D4 D5

    A: A4 A5
    I: <not found>
    F: F4 F5
    H: <not found>
    E: E4 E5

    — Ken

Re: Comparing a list to a tab delimited text file
by poj (Monsignor) on Jan 11, 2018 at 18:01 UTC

    As Laurent_R said, store the words in a hash for fast look-up:

    #!/usr/bin/perl
    use strict;

    my $t0 = time;

    # build dictionary
    my $wordfile = 'words1.txt';
    my %dict  = ();
    my $count = 0;

    open my $in, '<', $wordfile or die "Could not open $wordfile : $!";
    while (<$in>) {
        chomp;
        my ($motlexique, @cols) = split /\t/, $_;
        $dict{$motlexique} = \@cols;
        ++$count;
    }
    close $in;
    print "$count lines read from $wordfile\n";

    # scan text file
    $count = 0;
    my $textfile = 'text1.txt';

    open my $in2, '<', $textfile or die "Could not open $textfile : $!";
    while (<$in2>) {
        chomp;
        my ($searchword, @cols) = split /\t/, $_;
        if (exists $dict{$searchword}) {
            my $genre  = $dict{$searchword}[3];
            my $nombre = $dict{$searchword}[4];
            # Do some things here
            print "Matched '$searchword' to $genre $nombre\n";
        }
        ++$count;
    }
    close $in2;

    my $dur = time - $t0;
    print "$count lines read from $textfile in $dur seconds\n";
    poj
      Azaghal,

      A hash is a good solution.

      An optimisation you can do is NOT to create or update the genre and nombre variables UNLESS a match was found. In the above solution, these two variables are created only when the if (exists ...) test succeeds; a sketch of the difference follows.
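      A minimal, self-contained illustration (the sample data here is made up):

      use strict;
      use warnings;

      my %dict = (chat => ['m', 'sg']);
      my $searchword = 'inconnu';          # a word that is not in %dict

      # only build $genre/$nombre once the match is confirmed,
      # exactly as poj's loop does
      if (exists $dict{$searchword}) {
          my ($genre, $nombre) = @{ $dict{$searchword} };
          print "Matched '$searchword' to $genre $nombre\n";
      }

      # note: unconditionally writing my $genre = $dict{$searchword}[0];
      # before the exists() test would not only waste work on misses,
      # it would also autovivify an empty entry for every miss,
      # slowly bloating %dict.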

Re: Comparing a list to a tab delimited text file
by Laurent_R (Canon) on Jan 11, 2018 at 17:54 UTC
    It depends on exactly how large a "very large" file is.

    The typical way to do this type of work is to read your tab-delimited text file once, storing its first-column words in a hash (with whatever other columns you need as the values), and then to look each word up in the hash. The reason this is a good approach is that hash look-up is extremely fast (and does not depend on the size of the hash). Of course, this works only if the hash does not grow too large to fit in memory. So please provide more information about the size of your files and what you would need to store in addition to the key.
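    In code, the idea looks something like this (just a minimal sketch: the file name, word list, and column positions are placeholders, since we haven't seen your data):

    use strict;
    use warnings;

    # build the look-up hash once from the tab-delimited file
    my %lexique;
    open my $tsv, '<', 'lexique.tsv' or die "Could not open lexique.tsv: $!";
    while (<$tsv>) {
        chomp;
        my ($word, @cols) = split /\t/;
        $lexique{$word} = \@cols;        # keep the other columns as the value
    }
    close $tsv;

    # each look-up is then O(1), however big %lexique is
    my @words = qw(chat chien maison);   # placeholder word list
    for my $word (@words) {
        if (exists $lexique{$word}) {
            my ($genre, $nombre) = @{ $lexique{$word} }[3, 4];
            print "$word: $genre $nombre\n";
        }
    }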

    It would be good if you could provide a small extract of both files, showing cases matching your searches.
