The Monastery Gates

New Questions
Lagrange Polynomials in Perl
3 direct replies — Read more / Contribute
by Mascasc
on Apr 28, 2015 at 15:35

    This question is sufficiently answered; I'm just an idiot who forgets to use references.

    I wrote a little module that I could use to approximate lagrange polynomials in Perl.

    The only problem is, it doesn't work. I consistently get a single value whenever I feed some test function into Lagrange's maw.

    Any help would be appreciated, code below:

    #! /usr/bin/perl
    use strict;
    use warnings;

    # $x is a reference to the array of x values, $f a reference to the
    # function values; @LC holds the reduced Lagrange coefficients.
    sub lagrange {
        my ($x, $f) = @_;
        my @LC;
        my $points = @$x;
        for my $i (0 .. $points - 1) {
            my $den = 1;
            for my $j (0 .. $points - 1) {
                $den *= ($x->[$i] - $x->[$j]) unless $j == $i;
            }
            $LC[$i] = $f->[$i] / $den;
        }
        return \@LC;
    }

    sub lageval {
        my ($x, $LC, $x0) = @_;
        my $points = @$x;
        my $sum    = 0;
        for my $i (0 .. $points - 1) {
            my $mult = 1;
            for my $j (0 .. $points - 1) {
                $mult *= $x0 - $x->[$j] unless $j == $i;
            }
            $sum += $mult * $LC->[$i];
        }
        return $sum;
    }

    1;
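    The symptom the poster describes comes from `my @x = shift;`, which puts a single scalar into `@x` instead of a whole array. A minimal sketch of the reference-passing fix (`sum_pairwise` is a made-up example sub, not the poster's code):

```perl
# Pass the two arrays as references: a plain `my @x = shift;` only
# captures one element of @_, and the second array is lost entirely.
use strict;
use warnings;

sub sum_pairwise {
    my ($x, $f) = @_;                 # two array references
    my $total = 0;
    $total += $x->[$_] * $f->[$_] for 0 .. $#$x;
    return $total;
}

my @x = (1, 2, 3);
my @f = (10, 20, 30);
print sum_pairwise(\@x, \@f), "\n";   # prints 140
```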
Inherit methods from IO::File and IO::Uncompress depending on file extension
2 direct replies — Read more / Contribute
by jnoirel
on Apr 28, 2015 at 07:07

    Hi there,

    I'm trying to develop a package DATASTORE used like this

    my $D = DATASTORE->new('dir', 'filename');
    while ($_ = $D->getline) {
        [...]
    }

    The added benefit of using DATASTORE (in case you're wondering) is that it does some work towards automatically locating the data files I want based on some local config. I'd like the $D variable to inherit all the methods from IO::File.

    package DATASTORE;
    use parent 'IO::File';

    sub new {
        my ($class, @args) = @_;
        my $filename;
        [...]
        my $self = $class->SUPER::new($filename, 'r');
        return $self if defined $self;
        return;
    }

    That seems to work for me. Now what I'm not sure about is how you'd adapt the code so that depending on the extension, it uses IO::File OR IO::Uncompress::Gunzip (if the extension is GZ).

    Would this be something that's easy to do?
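    One hedged approach (a sketch, untested against the poster's setup): pick the underlying class in the constructor based on the extension. Both IO::File and IO::Uncompress::Gunzip objects answer getline(), so callers need not care which one they received.

```perl
# Dispatch on the extension in the constructor; .gz files get a gunzip
# stream, everything else a plain IO::File. The [...] config-lookup
# logic from the original post is omitted here.
package DATASTORE;
use strict;
use warnings;
use IO::File;
use IO::Uncompress::Gunzip;

sub new {
    my ($class, $filename) = @_;
    if ($filename =~ /\.gz\z/i) {
        my $fh = IO::Uncompress::Gunzip->new($filename)
            or die "Cannot gunzip $filename\n";
        return $fh;
    }
    my $fh = IO::File->new($filename, 'r')
        or die "Cannot open $filename\n";
    return $fh;
}

1;
```

    The tradeoff: the returned object is not blessed into DATASTORE, so any extra DATASTORE methods would have to live elsewhere (or the two cases would need thin DATASTORE subclasses of each parent).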


Is tie intended to be used in this way?
1 direct reply — Read more / Contribute
by Discipulus
on Apr 28, 2015 at 07:01
    Suppose I have an array and I want to trigger some action when some of the elements are modified, but not when they are accessed.
    I discovered the tie ability of Perl and it seems appropriate:

    Is it intended to be used in this way?
    #!perl
    use strict;
    use warnings;

    {   # bare block because "package BLOCK" appeared only in 5.14
        package Arraytrigger;
        use Tie::Array;
        use vars qw(@ISA);
        @ISA = ('Tie::StdArray');

        # Note: the SUPER:: calls take just @_ -- the original passed
        # $self twice ($self->SUPER::STORE($self, @_)).
        sub STORE   { &main::trigger; my $self = shift; $self->SUPER::STORE(@_) }
        sub CLEAR   { &main::trigger; my $self = shift; $self->SUPER::CLEAR(@_) }
        sub PUSH    { &main::trigger; my $self = shift; $self->SUPER::PUSH(@_) }
        sub POP     { &main::trigger; my $self = shift; $self->SUPER::POP(@_) }
        sub SHIFT   { &main::trigger; my $self = shift; $self->SUPER::SHIFT(@_) }
        sub UNSHIFT { &main::trigger; my $self = shift; $self->SUPER::UNSHIFT(@_) }
    }

    # package main..
    sub trigger { print "Triggered [@_]\n" }

    tie my @arr, 'Arraytrigger';
    print "\tSetting list:\n" and @arr = qw(a b c d e);
    print "\tSetting one:\n"  and $arr[0] = 0;
    print "\tpushing:\n"      and push @arr, 'f';
    print "\tpopping:\n"      and pop @arr;
    print "\tshifting:\n"     and shift @arr;
    print "\tunshifting:\n"   and unshift @arr, 'zero';
    print "\tclearing:\n"     and @arr = ();  # undef @arr and @arr=undef seem no good..

    Or can I avoid all the repeated code in some other way?
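    One way the repetition might be avoided (a sketch, not necessarily the approach the poster had in mind): generate the six wrappers in a loop via glob assignment, dispatching to the parent by fully qualified method name.

```perl
use strict;
use warnings;

{
    package Arraytrigger;
    use Tie::Array;
    our @ISA = ('Tie::StdArray');

    # Build all six mutating wrappers in one loop instead of writing
    # each sub out by hand; FETCH etc. stay inherited untouched.
    for my $method (qw(STORE CLEAR PUSH POP SHIFT UNSHIFT)) {
        no strict 'refs';
        *{$method} = sub {
            main::trigger($method);
            my $self  = shift;
            my $super = "Tie::StdArray::$method";
            $self->$super(@_);    # fully qualified method call
        };
    }
}

package main;
sub trigger { print "Triggered [@_]\n" }

tie my @arr, 'Arraytrigger';
push @arr, 'f';      # prints "Triggered [PUSH]"
$arr[0] = 0;         # prints "Triggered [STORE]"
```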

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
Ubuntu 14.04.1 & Net::SSLGlue::POP3
1 direct reply — Read more / Contribute
by JimSi
on Apr 27, 2015 at 12:53

    I have a problem compiling or installing the Net::SSLGlue / Net::SSLGlue::POP3 package on the newest Ubuntu 14.04.1. I never had any problem with it on CentOS, and the script works fine there. On Ubuntu, if I install from a package (apt), the script cannot run and gives me the following error:

    $ ./t.perl
    Subroutine Net::POP3::starttls redefined at /usr/share/perl5/Net/SSLGlue/ line 13.
    Subroutine Net::POP3::_STLS redefined at /usr/share/perl5/Net/SSLGlue/ line 27.
    cannot find and replace IO::Socket::INET superclass at /usr/share/perl5/Net/SSLGlue/ line 93.
    Compilation failed in require at ./t.perl line 21.
    BEGIN failed--compilation aborted at ./t.perl line 21.

    when I am trying to install from CPAN:
    PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/01_load.t .. Subroutine Net::SMTP::starttls redefined at /root/.cpan/build/Net-SSLGlue-1.053-fRXdxl/blib/lib/Net/SSLGlue/ line 13.
    Subroutine Net::SMTP::_STARTTLS redefined at /root/.cpan/build/Net-SSLGlue-1.053-fRXdxl/blib/lib/Net/SSLGlue/ line 30.
    t/01_load.t .. Failed 1/3 subtests

    Test Summary Report
    -------------------
    t/01_load.t (Wstat: 0 Tests: 3 Failed: 1)
      Failed test: 1
    Files=1, Tests=3, 1 wallclock secs ( 0.02 usr 0.00 sys + 0.10 cusr 0.00 csys = 0.12 CPU)
    Result: FAIL
    Failed 1/1 test programs. 1/3 subtests failed.
    make: *** [test_dynamic] Error 255

    The script fails to load the Net::SSLGlue::POP3 module.
    Please help.
Efficient Automation
4 direct replies — Read more / Contribute
by jmneedhamco
on Apr 27, 2015 at 11:29

    I am working on a script to automate some process checking: the script should check a set of processes and, if one is not running, start it.

    There are several instances of Apache, for example, running for various groups in our company. So I envision a foreach loop that checks each process in turn and, if the ps command returns FAILURE, launches the start command associated with that process.

    The question here is: would the best approach be a couple of arrays with the commands in them? The way to look at each process on this box is the same, save for the item we are grepping for.

    Help doing this in the most efficient way is appreciated.
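    Rather than parallel arrays, a single hash can map the string to look for in the `ps` output to the start command for that service. A hedged sketch (all service names and paths here are invented for illustration):

```perl
use strict;
use warnings;

my %services = (
    'apache-groupA' => '/etc/init.d/apache-groupA start',
    'apache-groupB' => '/etc/init.d/apache-groupB start',
);

# The check is split out so canned ps output can be fed in for testing.
sub is_running {
    my ($name, @ps_lines) = @_;
    return scalar grep { index($_, $name) >= 0 } @ps_lines;
}

my @ps = `ps -e -o args`;
for my $name (sort keys %services) {
    unless (is_running($name, @ps)) {
        print "starting $name\n";
        system($services{$name}) == 0
            or warn "failed to start $name\n";
    }
}
```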

perl module help
3 direct replies — Read more / Contribute
by janasec
on Apr 26, 2015 at 13:40

    Hi. I am learning to write automation for running some tests. To begin with, I have written the following code. I need to know how I can create a Perl module so I can make calls to check if a host is alive.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use lib '/home/suse/junk/automation1/emulex';
    use Config::Simple;
    use Net::Ping::External qw(ping);
    use 5.010;

    # myconf.cfg is a file with all hosts
    my $cfg = new Config::Simple('/home/suse/junk/automation1/emulex/myconf.cfg');

    # accessing values
    my $host = $cfg->param("host1");
    print "checking $host is reachable or not\n";

    my $alive = ping(hostname => $host, count => 5, size => 1024, timeout => 3);
    if ($alive) {
        print "$host is alive!\n";
    } else {
        die "Could not ping host '$host'";
    }
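    A hedged sketch of the module being asked for. The package name, the file name HostCheck.pm, and the switch to core Net::Ping (instead of the poster's Net::Ping::External) are all my choices, not the poster's:

```perl
# Save as HostCheck.pm in a directory on @INC (or add a `use lib`).
package HostCheck;
use strict;
use warnings;
use Net::Ping;               # core module; Net::Ping::External works too
use Exporter 'import';
our @EXPORT_OK = qw(is_alive);

# Returns 1 if the host answers a TCP ping within the timeout, else 0.
sub is_alive {
    my ($host, %opt) = @_;
    my $p  = Net::Ping->new('tcp', $opt{timeout} // 3);
    my $ok = $p->ping($host);
    $p->close;
    return $ok ? 1 : 0;
}

1;
```

    Callers then just write: use HostCheck qw(is_alive); print "$host is up\n" if is_alive($host);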
A better way of lookup?
10 direct replies — Read more / Contribute
by BrowserUk
on Apr 26, 2015 at 07:50

    This has been a recurring dilemma down the years.

    Given a contiguous input and a set of break points, find the highest breakpoint lower than the input value and return the associated value.

    sub lookup {
        my $v = shift;
        return 2500        if $v < 25000;
        return 5000        if $v < 50000;
        return 12500       if $v < 150000;
        return 25000       if $v < 225000;
        return 37500       if $v < 300000;
        return 60000       if $v < 600000;
        return 120000      if $v < 1200000;
        return 300000      if $v < 3600000;
        return 600000      if $v < 5400000;
        return 900000      if $v < 10800000;
        return 1800000     if $v < 21600000;
        return 3600000     if $v < 43200000;
        return 7200000     if $v < 64800000;
        return 10800000    if $v < 129600000;
        return 21600000    if $v < 216000000;
        return 43200000    if $v < 432000000;
        return 86400000    if $v < 864000000;
        return 172800000   if $v < 1728000000;
        return 345600000   if $v < 3024000000;
        return 604800000   if $v < 6048000000;
        return 1209600000  if $v < 12096000000;
        return 2629800000  if $v < 31557600000;
        return 5259600000  if $v < 63115200000;
        return 7889400000  if $v < 78894000000;
        return 15778800000 if $v < 157788000000;
        return 31557600000;
    }

    Simple. Efficient. Not very pretty. Is there a better way?

    • I could stick the values in a hash, iterate the keys and return the value:

      but that requires either sorting the keys each time or keeping a sorted array of the keys and duplicating memory.

    • I could put the break points and values in parallel arrays;

      but ... parallel arrays?

    • I could use an array of pairs (AoA):

      But looping over the double indirection isn't particularly efficient.

    Then there's the search method. Most times the set isn't big enough to warrant coding a binary search in place of a linear one. Most times efficiency isn't a particular concern, but in this case the routine is called as part of a redraw function when rotating stuff on screen, so it can be called many times a second.

    Basically, there are several ways of doing it, but none of them are particularly satisfying, and I'm wondering if anyone has discovered a nice way that I haven't covered?

    (The final twist is that this is destined for JavaScript; so if any JS guys know of a good method that language supports; I'd be happy to hear of it. Perhaps off-line.)
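    For concreteness, the parallel-arrays option from the list above, with a linear scan (a sketch: only the first few rows of the original table are shown, and $default stands in for the rest of it):

```perl
use strict;
use warnings;

# Break points and associated values in sorted parallel arrays.
my @breaks  = (25000, 50000, 150000, 225000, 300000);
my @values  = ( 2500,  5000,  12500,  25000,  37500);
my $default = 60000;   # next value in the original table

sub lookup {
    my ($v) = @_;
    for my $i (0 .. $#breaks) {
        return $values[$i] if $v < $breaks[$i];
    }
    return $default;
}

print lookup(10000),  "\n";   # prints 2500
print lookup(100000), "\n";   # prints 12500
print lookup(400000), "\n";   # prints 60000
```

    The same two-array layout translates directly to JavaScript, and swapping the scan for a binary search later touches only the body of lookup.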

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
Net::SSH2 channel returns no output
1 direct reply — Read more / Contribute
by rama133101
on Apr 25, 2015 at 16:08

    I need to send some commands to a remote Linux box and fetch the responses. I am using Net::SSH2 to create a session channel and send commands through a shell.

    The problem I face is that the response I receive is empty. If I add a sleep statement, the output is captured just fine. I do not want to add sleep statements, as I need to send multiple commands and that would hurt performance. Please advise on what I am missing.

    Here is the code.

    $session = Net::SSH2->new();
    $rc = $session->connect($target_ip, $target_port, Timeout => 4000);
    print "\n rc: $rc";
    $session->auth_password($username, $passwd);
    $chan = $session->channel();
    $chan->shell();
    print $chan $cmd . " \n";
    my @poll = ({ handle => $chan, events => ['in', 'ext', 'channel_closed'] });
    $session->blocking(0);
    $session->poll(1, \@poll);
    if ($poll[0]->{revents}->{in}) {
        while (<$chan>) {
            $resp .= $_;
        }
    }
    print "\nresponse : $resp";

    Updates: We found out the issue here.

    The problem is that once we receive the command output, EOF never arrives, so the channel is not closed automatically.

    Question: How will the SSH channel know that the command it sent has returned its complete output, and when should it close the channel?
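    A hedged sketch of the usual workaround: have the shell echo a sentinel marker after each command, and read until the marker shows up instead of waiting for an EOF that never comes on a shell channel. Demonstrated here with an in-memory filehandle standing in for the Net::SSH2 channel:

```perl
use strict;
use warnings;

my $MARKER = "__CMD_DONE__";

# Over the real channel one would send: print $chan "$cmd; echo $MARKER\n";
# then collect lines until the echoed marker appears.
sub read_until_marker {
    my ($fh) = @_;
    my $out = '';
    while (my $line = <$fh>) {
        last if index($line, $MARKER) >= 0;
        $out .= $line;
    }
    return $out;
}

# In-memory filehandle simulating the channel's pending output:
my $canned = "line1\nline2\n__CMD_DONE__\nleftover\n";
open my $fh, '<', \$canned or die $!;
print read_until_marker($fh);   # prints line1 and line2 only
```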

Wait for individual sub processes [SOLVED]
7 direct replies — Read more / Contribute
by crackerjack.tej
on Apr 25, 2015 at 03:01

    Dear monks,

    I am essentially writing a Perl script that divides a large input file for a text processing tool, so that I can process the files faster. I am working on a CentOS 6 based cluster, where each CPU has 16 cores. My idea is to split the input file into 16 parts, and run 16 instances of the text processing tool, and once all of them are done, I parse the output and merge it into a single file. In addition, the script will continue to process the next input file in a similar way. I have achieved that using fork(), wait() and exec() as follows (Omitting code that is not relevant):

    use strict;
    use warnings;
    use POSIX ":sys_wait_h";

    # Split input files into parts and store the filenames in array @parts
    ...

    my %children;
    foreach my $part (@parts) {
        my $pid = fork();
        die "Cannot fork for $part\n" unless defined $pid;
        if ($pid == 0) {
            exec("sh text_tool $part > $part.out") or die "Cannot exec $part\n";
        }
        print STDERR "Started processing $part with $pid at " . localtime . "\n";
        $children{$pid} = $part;
    }
    while (%children) {
        my $pid = wait();
        die "$!\n" if $pid < 1;
        my $part = delete $children{$pid};
        print STDERR "Finished processing $part at " . localtime . "\n";
    }

    While I got what I wanted, there is a small problem. Due to the nature of the text processing tool, some parts get completed much earlier than others, in no specific order. The difference is in hours, which means that many cores of the CPU are idle for a long time, just waiting for a few parts to finish.

    This is where I need help. I want to keep checking which part (or corresponding process) has exited successfully, so that I can start the processing of the same part of the next input file. I need your wisdom on how I can achieve this. I tried searching a lot on various forums, but did not understand correctly how this can be done.



    ----------------UPDATE 1--------------

    Using a hash, I can now find out which process is exiting when. But I fail to understand how to use this code in an if block, so that I can start the next process. Can someone help me with that? I have updated the code accordingly.

    ----------------UPDATE 2--------------

    I guess it's working now. Using Parallel::ForkManager, and a hash of arrays that stores the pids of each input file, I am able to track the sub processes of each file separately. By maintaining a count of number of subprocesses exited, I can call the sub for output parsing as soon as the count reaches 16 for an input file. I will come back if I run into any other problem.
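    For reference, a hedged, core-only sketch of the pattern the updates describe: start the next job as soon as any worker slot frees up, using only fork and wait (Parallel::ForkManager wraps the same idea). The job names are invented, and process_part stands in for running `sh text_tool $part`:

```perl
use strict;
use warnings;

my $max_workers = 4;                    # 16 in the poster's setting
my @queue       = map { "part$_" } 1 .. 10;
my %running;                            # pid => work item

sub process_part {
    my ($part) = @_;
    select undef, undef, undef, 0.01;   # simulate some work
}

my $finished = 0;
while (@queue or %running) {
    # Fill every free slot before blocking.
    while (@queue and keys(%running) < $max_workers) {
        my $part = shift @queue;
        my $pid  = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) { process_part($part); exit 0 }
        $running{$pid} = $part;
    }
    # Block until exactly one child exits, then loop back to refill.
    my $pid  = wait();
    my $done = delete $running{$pid};
    $finished++;
    print "Finished $done\n";
}
print "All $finished parts done\n";
```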

    Thanks a lot for all the help :)

    P.S. Is there any flag I have to set to mark this thread as answered/solved?

Out of Memory Error : V-Lookup on Large Sized TEXT File
7 direct replies — Read more / Contribute
by TheFarsicle
on Apr 24, 2015 at 09:14
    Hello perlmonks,

    I am a newbie to Perl, working on a Perl script to perform an action similar to a V-Lookup.


    As input I have some large text files, around 200 MB each. These text files are to be searched for all the records present in another file, say Reference.txt (this file is normally not more than one MB).

    I have written a script to find all the lines present in these large files based on the text (string values) in the Reference.txt file. All the found records are then written into a new file, one per large-file iteration.

    The script works fine for normal sizes like 30-40 MB, but when a file is more than 100 MB or so, it throws an out-of-memory error.

    I have designed these operations as subroutines and call them.

    The code goes something like this...

    open my $ref_fh, '<', $ReferenceFilePath or die "Can't open file: $!";
    chomp(my @REFFILELIST = <$ref_fh>);
    close $ref_fh;

    open my $out_fh, '>', $OUTPUTFILE or die $!;
    foreach my $line (@REFFILELIST) {
        open my $large_fh, '<', $LARGESIZEDFILE or die "Can't open file: $!";
        while (<$large_fh>) {
            # index() returns the match offset, so test >= 0 (0 means a
            # hit at the start of the line); print to the one handle
            # opened above rather than re-opening the output file for
            # every matching line, as the original did.
            if (index($_, $line) >= 0) {
                print $out_fh $_;
            }
        }
        close $large_fh;
    }
    close $out_fh;

    Can you please guide me on where I am going wrong and what would be the best way to address this issue?
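    One direction worth considering (a hedged sketch, with a made-up helper sub): invert the loops so the big file is read line by line exactly once, and each line is tested against every reference string. Then only the small Reference.txt needs to sit in memory.

```perl
use strict;
use warnings;

# Return every line that contains at least one of the search strings.
sub matching_lines {
    my ($refs, @lines) = @_;            # $refs: arrayref of search strings
    my @hits;
    LINE: for my $line (@lines) {
        for my $needle (@$refs) {
            if (index($line, $needle) >= 0) {
                push @hits, $line;
                next LINE;              # at most one copy per line
            }
        }
    }
    return @hits;
}

my @out = matching_lines(['foo', 'bar'],
                         "a foo line\n", "nothing here\n", "bar here\n");
print @out;   # prints the two matching lines
```

    In the real script the @lines argument would be replaced by a while (<$large_fh>) loop, so the 200 MB file is streamed rather than slurped.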

    Thanks in advance.


DESTROY and AUTOLOAD in 5.20.1
4 direct replies — Read more / Contribute
by szabgab
on Apr 24, 2015 at 05:36
    Given this script:
    use strict;
    use warnings;
    use 5.010;
    use Greeting;

    say 'Hi';
    {
        my $g = Greeting->new;
    }
    say 'Bye';
    and this module:
    package Greeting;
    use strict;
    use warnings;
    use 5.010;
    use Data::Dumper;

    sub new {
        my ($class) = @_;
        return bless {}, $class;
    }

    sub AUTOLOAD {
        our $AUTOLOAD;
        say $AUTOLOAD;
    }

    sub DESTROY {
        say 'destroy';
    }

    1;
    I can see the word "destroy" printed, as I would expect. However, if I remove the DESTROY from the module, I don't see AUTOLOAD being called in place of the missing DESTROY. I have only checked with 5.20.1, and I wonder what I am missing here.


    Reported with perlbug as RT #124387
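    For context, the conventional idiom in AUTOLOAD-using classes, shown with a hypothetical Greeter class (not the poster's module): define an explicit DESTROY so object destruction never falls through to AUTOLOAD, plus an in-AUTOLOAD guard as belt and braces.

```perl
use strict;
use warnings;

package Greeter;

sub new { return bless {}, shift }

our $AUTOLOAD;
sub AUTOLOAD {
    my $name = $AUTOLOAD;
    return if $name =~ /::DESTROY\z/;   # guard, in case DESTROY is removed
    return "autoloaded $name";
}

# The usual idiom: an explicit (empty) DESTROY keeps destruction out
# of AUTOLOAD entirely.
sub DESTROY { }

package main;
my $g = Greeter->new;
print $g->greet, "\n";   # prints "autoloaded Greeter::greet"
```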
Getting an unknown error
5 direct replies — Read more / Contribute
by andybshaker
on Apr 23, 2015 at 10:02

    Basically, different arrays have different pieces of information in them, and I have to go from one to another to another from that. In this case, I have to take each element in @Genes and extract its corresponding element from a long file which I read in as @lines. I keep getting a strange error that reads: syntax error at line 38, near "$N (". Does anyone know what this is? Here is the code.

    my @Genes = qw(A B C D);
    my @ptt = ("19384..003059 0 - - A", "203581..39502 0 + - B");
    my @contig = ();
    my @Coordinates;
    my @Number;
    my $R;
    foreach my $G (@Genes) {
        for my $x (0 .. $#ptt) {
            if ($ptt[$x] =~ /$G/) {
                push @Coordinates, $ptt[$x];
                print "$ptt[$x]\n";
            }
        }
    }
    foreach my $C (@Coordinates) {
        push @Number, split " ", $C;
    }
    my %hash = ();
    my $file = "scaffold_contig.txt";
    open(IN, "<$file") or die "Cannot open file $file\n";
    my @lines = <IN>;
    foreach my $line (@lines) {    # $1 is read-only; use a lexical loop variable
        chomp $line;
        my %columns = split ">", $line;
    }
    close(IN);
    print "$lines[1];\n";          # the missing semicolon here caused the error
    foreach my $N (@Number) {
        for $R (0 .. $#lines) {
            if ($lines[$R] =~ /$N/) {
                print "$lines[$R]\n";
            }
        }
    }

    Here is line 38: foreach my $N (@Number){

New Meditations
Refactoring Perl5 with XS++
5 direct replies — Read more / Contribute
by rje
on Apr 25, 2015 at 01:06

    Last time I mused aloud about "refactoring" Perl, I referenced Chromatic's statement/challenge:

    "If I were to implement a language now, I'd write a very minimal core suitable for bootstrapping. ... Think of a handful of ops. Think very low level. (Think something a little higher than the universal Turing machine and the lambda calculus and maybe a little bit more VMmy than a good Forth implementation, and you have it.) If you've come up with something that can replace XS, stop. You're there. Do not continue. That's what you need." (Chromatic, January 2013)

    I know next to nothing about XS, so I started reading perldoc.

    I'm thinking about the problem, so if there is a question, it would be "what NOW?"

    Should I bother with thinking about bytecodes? In what sense could it be a replacement for XS? What does "replace XS" even MEAN? (i.e. perhaps it just means "remove the need to use perl's guts to write extensions, and improve the API").

    Most importantly, am I wasting people's time by asking?

    I'm trying to come up with my own answers, and learn by trying. But wisdom is in knowing that some of you guys have already thought through this. If you can help bring me up to speed, I'd appreciate it.

    UPDATE: I see even within the Lorito page, it was understood that the discussion was to some degree about Perl's API: "This is an issue of API design. If we understand the non-essential capabilities we want to support (e.g. optimization passes, etc), we can design the API so that such capabilities can be exploited but not required. - cotto "
