PerlMonks  

unix loops in perl

by a217 (Novice)
on Oct 26, 2011 at 05:25 UTC ( [id://933785] )

a217 has asked for the wisdom of the Perl Monks concerning the following question:

I am trying to create a pipeline using a Perl script to make calls to various code. The problem I'm facing is that the initial program splits a large data file into separate smaller files (separated by chromosome), so that numerous files are produced (e.g. chr1.out, chr2.out, chr3.out ...). When I was not restricted to the pipeline, I used a Unix for-loop to cycle through each respective output file in this manner:

for i in {1..24}
do
    perl /home/perl/chr.match.pl chr$i.nonCG.out
    echo "chr$i"
    date
done

What I am trying to convey here is that each filename is nearly the same except for the specific chr number (which is why I cycle through from chr1 to chr24).

However, within the Perl script I am having a hard time finding the correct syntax to cycle through each chromosome. I have been using the system() function in Perl, which allows me to execute separate Perl code on the server using the pipeline. I have tried:

system("for i in {1..24} do perl /home/perl/chr.match.pl chr$i.nonCG.out echo "chr$i" date done");

which does not seem to work, and I've also tried:

`for i in {1..24} do perl /home/perl/chr.match.pl chr$i.nonCG.out echo "chr$i" date done`;

because I know the backtick (`) character allows you to enter Unix commands. I know I am approaching this the wrong way, but I'm having a hard time finding a solution. I have already tried to concatenate the separated output files and then run the pipeline on one large file, but this method will create problems down the road for me. I've also included a sample to test if necessary:

pipe.test.pl

#!/usr/bin/perl -w
# pipe.test.pl
use strict;
use warnings;

open(IN,  "<$ARGV[0]") or die "error reading file";
open(OUT, ">$ARGV[1]") or die "error writing file";
while (my $line = <IN>) {
    chomp($line);
    my @split = split("\t", $line);
    if ($split[5] == 1) {
        print OUT "$line\n";
    }
}
close IN;
close OUT;

chr1.in.txt

chr1 100 159 104 104 1 0.05 +
chr1 100 159 145 145 0 0.04 +
chr1 200 260 205 205 1 0.12 +
chr1 500 750 600 600 1 0.09 +
chr1 800 900 600 600 1 0.09 +

chr2.in.txt

chr2 100 200 105 105 1 0.03 +
chr2 100 200 110 110 1 0.08 +
chr2 300 400 350 350 0 0 +

As you can see it's a very simple test, and to run it for chr1 I use the command: "perl pipe.test.pl chr1.in.txt chr1.out.txt". Essentially, the goal here is to be able to execute pipe.test.pl from a separate Perl script, using for-loops to cycle through each chromosome. I hope I am coming across clearly to everybody.
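The goal described above can be sketched as a wrapper script that loops in Perl and shells out once per chromosome. This is only a sketch: the wrapper's name is hypothetical, and the filenames and chromosome count 24 are taken from the question.

```perl
#!/usr/bin/perl
# run.pipeline.pl -- hypothetical wrapper around pipe.test.pl
use strict;
use warnings;

for my $c (1 .. 24) {
    my $in  = "chr$c.in.txt";
    my $out = "chr$c.out.txt";
    next unless -e $in;    # skip chromosomes that were not produced

    # List form of system() bypasses the shell, so there are no
    # quoting or interpolation pitfalls to worry about.
    system('perl', 'pipe.test.pl', $in, $out) == 0
        or die "pipe.test.pl failed on $in: $?";
    print "chr$c done\n";
}
```

Because $c is a Perl variable here, interpolation works for us instead of against us.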

Replies are listed 'Best First'.
Re: unix loops in perl
by moritz (Cardinal) on Oct 26, 2011 at 05:44 UTC

    Simply use a loop in Perl:

    for my $c (1..24) {
        my $filename = "chr$c";
        open my $IN, '<', $filename
            or die "Cannot open '$filename' for reading: $!";
        open my $OUT, '>', "$filename.out"
            or die "Cannot open '$filename.out' for writing: $!";
        while (my $line = <$IN>) {
            ...
        }
        close $IN;
        close $OUT;
    }

      Thank you. I tried using perl loops before, but I must have gotten the syntax wrong so I just stopped trying. But I knew it was just a simple problem I was overlooking.

Re: unix loops in perl
by ikegami (Patriarch) on Oct 26, 2011 at 06:28 UTC

    Your approach is failing because $i is being interpolated by Perl; the shell never sees it.

    use strict; would have caught it. Well, unless your Perl script also had a variable named $i in scope. Use better var names!
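    A minimal sketch of that diagnosis, for the case where the loop really must stay in the shell: quoting the command with q{...} (single-quote semantics) stops Perl from interpolating $i, so the shell receives its own loop variable intact. The three-item loop is a stand-in for the original {1..24}, which is a bash-only expansion.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# q{...} behaves like single quotes: Perl leaves $i alone,
# so the shell sees its own loop variable.
my $cmd = q{for i in 1 2 3; do echo "chr$i"; done};
my $out = `$cmd`;
print $out;
```

    With double quotes you would instead have to escape every $i as \$i.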

Re: unix loops in perl
by i5513 (Pilgrim) on Oct 26, 2011 at 14:18 UTC
    It could be interesting to use pdsh with the exec module:
    pdsh -w [1-24] -Rexec perl /home/perl/chr.match.pl chr%h.nonCG.out
    It will run them for you in parallel, so less time is needed (surely there are CPAN modules to do this in Perl :) )
    Regards,
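    If pdsh is not available, the parallelism suggested here can be sketched in core Perl with fork/exec and wait. This is only a sketch: the script path comes from the question, while the cap of 4 concurrent children is an assumption to tune to your machine.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $max_children = 4;    # assumed cap; tune to your CPU count
my $running      = 0;

for my $c (1 .. 24) {
    my $in = "chr$c.nonCG.out";
    next unless -e $in;    # nothing to do for this chromosome

    # Throttle: wait for a child to finish before starting another.
    while ($running >= $max_children) {
        wait();
        $running--;
    }
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {       # child: process one chromosome, then exit
        exec 'perl', '/home/perl/chr.match.pl', $in;
        die "exec failed: $!";
    }
    $running++;
}
1 while wait() != -1;      # reap the remaining children
```

    Whether this is actually faster depends on the resource question debated below: fork buys time only when the children are not already saturating the CPU, memory, or disk.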
      It will run them for you in parallel, so less time is needed

      Without further proof or explanation, I'd assume this statement to be generally false.

        Ok, I should have said 'probably' less time is needed.

        If your process doesn't take many resources (CPU/network/memory), then it is probably true, isn't it?

        So if you have 30 processes running at the same time on a computer that is perfectly capable of doing the work, it will be faster than running those 30 processes one by one in a loop.

        If you have an algorithm which consumes all (or nearly all) of the CPU/memory/network, then it will not be faster but slower.

        Do you agree now?

        Regards,

Node Type: perlquestion [id://933785]
Approved by GrandFather