The Monastery Gates

New Questions
Ubuntu 14.04.1 & Net::SSLGlue::POP3
1 direct reply — Read more / Contribute
by JimSi
on Apr 27, 2015 at 12:53

    I have a problem compiling or installing the Net::SSLGlue / Net::SSLGlue::POP3 package on the newest Ubuntu 14.04.1. I never had any problem with it on CentOS, and the script works fine there. On Ubuntu, if I install it from a package (apt), the script fails to run and gives me the following error:

    $ ./t.perl
    Subroutine Net::POP3::starttls redefined at /usr/share/perl5/Net/SSLGlue/POP3.pm line 13.
    Subroutine Net::POP3::_STLS redefined at /usr/share/perl5/Net/SSLGlue/POP3.pm line 27.
    cannot find and replace IO::Socket::INET superclass at /usr/share/perl5/Net/SSLGlue/POP3.pm line 93.
    Compilation failed in require at ./t.perl line 21.
    BEGIN failed--compilation aborted at ./t.perl line 21.

    When I try to install from CPAN, the test suite fails:
    PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
    t/01_load.t .. Subroutine Net::SMTP::starttls redefined at /root/.cpan/build/Net-SSLGlue-1.053-fRXdxl/blib/lib/Net/SSLGlue/SMTP.pm line 13.
    Subroutine Net::SMTP::_STARTTLS redefined at /root/.cpan/build/Net-SSLGlue-1.053-fRXdxl/blib/lib/Net/SSLGlue/SMTP.pm line 30.
    t/01_load.t .. Failed 1/3 subtests

    Test Summary Report
    -------------------
    t/01_load.t (Wstat: 0 Tests: 3 Failed: 1)
      Failed test:  1
    Files=1, Tests=3,  1 wallclock secs ( 0.02 usr  0.00 sys +  0.10 cusr  0.00 csys =  0.12 CPU)
    Result: FAIL
    Failed 1/1 test programs. 1/3 subtests failed.
    make: *** [test_dynamic] Error 255

    The script fails to load the "Net::SSLGlue::POP3" module.
    Please help.
    Thanks
Efficient Automation
4 direct replies — Read more / Contribute
by jmneedhamco
on Apr 27, 2015 at 11:29

    I am working on a script to automate some process checking: basically, I want the script to check a list of processes and, if one is not running, start it.

    There are several instances of Apache, for example, running for various groups in our company. So I envision a foreach loop that checks each process in turn and, if the ps command reports failure, launches the start command associated with that process.

    The question here is: would the best approach be a couple of arrays with the commands in them? The way to look at each process on this box is the same, save for the item we are grepping for.

    Any help doing this in the most efficient way is appreciated.
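
    For illustration, a minimal sketch of one hash-based layout (process pattern => start command). The service names and start commands below are placeholders, not anything from the original post:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical map: a string to look for in the process table => command to (re)start it.
    my %services = (
        'apache-groupA' => '/etc/init.d/apache-groupA start',
        'apache-groupB' => '/etc/init.d/apache-groupB start',
    );

    for my $name (sort keys %services) {
        # `ps -eo args` lists the full command line of every process, one per line.
        my $running = grep { /\Q$name\E/ } `ps -eo args`;
        if ($running) {
            print "$name is running\n";
        }
        else {
            print "$name is down, restarting...\n";
            system($services{$name}) == 0
                or warn "Failed to start $name: $?\n";
        }
    }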

perl module help
3 direct replies — Read more / Contribute
by janasec
on Apr 26, 2015 at 13:40

    Hi, I am learning to write automation for running some tests. To begin with, I have written the following code. I need to know how I can create a Perl module so I can make calls to check whether a host is alive.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use lib '/home/suse/junk/automation1/emulex';
    use Config::Simple;
    use Net::Ping::External qw(ping);
    use 5.010;

    # myconf.cfg is a file with all hosts
    my $cfg = new Config::Simple('/home/suse/junk/automation1/emulex/myconf.cfg');

    # accessing values
    my $host = $cfg->param("host1");
    print "checking $host is reachable or not\n";

    my $alive = ping(hostname => "$host", count => 5, size => 1024, timeout => 3);
    print "$host is alive!\n" if $alive or die "Could not ping host '$host' ";
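
    For illustration, a minimal sketch of how the ping call above could be moved into a module. The module name HostCheck and its location are hypothetical, and the ping() arguments simply mirror the script above:

    # /home/suse/junk/automation1/emulex/HostCheck.pm  (hypothetical name and location)
    package HostCheck;

    use strict;
    use warnings;
    use Net::Ping::External qw(ping);
    use Exporter 'import';

    our @EXPORT_OK = qw(is_alive);

    # True if the host answers; same ping arguments as in the script above.
    sub is_alive {
        my ($host) = @_;
        return ping(hostname => $host, count => 5, size => 1024, timeout => 3);
    }

    1;

    The script can then say "use lib '/home/suse/junk/automation1/emulex'; use HostCheck qw(is_alive);" and call is_alive($host) wherever a reachability check is needed.
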
A better way of lookup?
9 direct replies — Read more / Contribute
by BrowserUk
on Apr 26, 2015 at 07:50

    This has been a recurring dilemma down the years.

    Given a contiguous input and a set of break points, find the highest breakpoint lower than the input value and return the associated value.

    sub lookup {
        my( $v ) = shift;
        if( $v <        25000 ) { return       2500 }
        if( $v <        50000 ) { return       5000 }
        if( $v <       150000 ) { return      12500 }
        if( $v <       225000 ) { return      25000 }
        if( $v <       300000 ) { return      37500 }
        if( $v <       600000 ) { return      60000 }
        if( $v <      1200000 ) { return     120000 }
        if( $v <      3600000 ) { return     300000 }
        if( $v <      5400000 ) { return     600000 }
        if( $v <     10800000 ) { return     900000 }
        if( $v <     21600000 ) { return    1800000 }
        if( $v <     43200000 ) { return    3600000 }
        if( $v <     64800000 ) { return    7200000 }
        if( $v <    129600000 ) { return   10800000 }
        if( $v <    216000000 ) { return   21600000 }
        if( $v <    432000000 ) { return   43200000 }
        if( $v <    864000000 ) { return   86400000 }
        if( $v <   1728000000 ) { return  172800000 }
        if( $v <   3024000000 ) { return  345600000 }
        if( $v <   6048000000 ) { return  604800000 }
        if( $v <  12096000000 ) { return 1209600000 }
        if( $v <  31557600000 ) { return 2629800000 }
        if( $v <  63115200000 ) { return 5259600000 }
        if( $v <  78894000000 ) { return 7889400000 }
        if( $v < 157788000000 ) { return 15778800000 }
        return 31557600000;
    }

    Simple. Efficient. Not very pretty. Is there a better way?

    • I could stick the values in a hash, iterate the keys and return the value:

      but that requires either sorting the keys each time or keeping a sorted array of the keys and duplicating memory.

    • I could put the break points and values in parallel arrays;

      but ... parallel arrays?

    • I could use an array of pairs (AoA):

      But looping over the double indirection isn't particularly efficient.

    Then there's the search method. Most of the time the set isn't big enough to warrant coding a binary search in place of a linear one. Most of the time efficiency isn't a particular concern either, but in this case the routine is called as part of a redraw function when rotating stuff on screen, so it can be called many times a second.

    Basically, there are several ways of doing it, but none of them are particularly satisfying, and I'm wondering if anyone has discovered a nice way that I haven't covered?

    (The final twist is that this is destined for JavaScript; so if any JS guys know of a good method that language supports; I'd be happy to hear of it. Perhaps off-line.)
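
    For comparison, a sketch (not BrowserUk's code) of the sorted-parallel-arrays-plus-binary-search variant, in Perl even though the final target is JavaScript. Only the first few breakpoint/value pairs from the sub above are shown; the full tables would continue the same pattern:

    # Breakpoints sorted ascending, values in a parallel array.
    my @breaks  = (  25_000,  50_000, 150_000, 225_000 );
    my @values  = (   2_500,   5_000,  12_500,  25_000 );
    my $default = 31_557_600_000;   # returned when $v is >= the last breakpoint

    sub lookup_bsearch {
        my ($v) = @_;
        my ($lo, $hi) = (0, $#breaks);
        my $answer = $default;
        while ($lo <= $hi) {
            my $mid = int(($lo + $hi) / 2);
            if ($v < $breaks[$mid]) {
                $answer = $values[$mid];   # candidate; keep looking for a smaller qualifying breakpoint
                $hi = $mid - 1;
            }
            else {
                $lo = $mid + 1;
            }
        }
        return $answer;
    }
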


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
Net::SSH2 channel returns no output
1 direct reply — Read more / Contribute
by rama133101
on Apr 25, 2015 at 16:08

    I need to send some commands to a remote Linux box and fetch the responses. I am using Net::SSH2 to create a session channel and send the commands through a shell.

    The problem I face is that the response I receive is empty. If I add a sleep statement, the output is captured just fine. I do not want to add sleep statements, as I need to send multiple commands and that would hurt performance. Please advise what I am missing.

    Here is the code.

    $session = Net::SSH2->new();
    $rc = $session->connect($target_ip, $target_port, Timeout => 4000);
    print "\n rc: $rc";
    $session->auth_password($username, $passwd);
    $chan = $session->channel();
    $chan->shell();
    print $chan $cmd . " \n";

    my @poll = ({ handle => $chan, events => ['in', 'ext', 'channel_closed'] });
    $session->blocking(0);
    $session->poll(1, \@poll);

    if ($poll[0]->{revents}->{in}) {
        while (<$chan>) {
            $resp .= $_;
        }
    }
    print "\nresponse : $resp";

    Update: We found the issue.

    The problem is that once we receive the command output, EOF never arrives, so the channel never gets closed automatically.

    Question: How will the SSH channel know that the command it sent has returned its complete output, and when should it close the channel?
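
    For illustration, a sketch (an alternative pattern, not a drop-in fix) that runs each command on its own channel with exec() instead of a shared shell(). The server closes that channel when the command exits, so reading until read() returns nothing marks the end of the output without any sleep(). It assumes $session is the already-connected, authenticated Net::SSH2 object from the code above:

    use strict;
    use warnings;
    use Net::SSH2;

    sub run_command {
        my ($session, $cmd) = @_;
        $session->blocking(1);                        # blocking reads, for simplicity
        my $chan = $session->channel() or die "channel failed\n";
        $chan->exec($cmd)              or die "exec '$cmd' failed\n";

        my ($resp, $buf) = ('', '');
        # read() returns the byte count, and 0/undef once the command has
        # finished and the remote side has sent EOF.
        while ($chan->read($buf, 4096)) {
            $resp .= $buf;
        }
        $chan->close;
        return $resp;
    }

    # my $out = run_command($session, 'uname -a');
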

Wait for individual sub processes [SOLVED]
7 direct replies — Read more / Contribute
by crackerjack.tej
on Apr 25, 2015 at 03:01

    Dear monks,

    I am essentially writing a Perl script that divides a large input file for a text-processing tool, so that I can process the files faster. I am working on a CentOS 6 based cluster, where each CPU has 16 cores. My idea is to split the input file into 16 parts and run 16 instances of the text-processing tool; once all of them are done, I parse the output and merge it into a single file. The script then continues to process the next input file in the same way. I have achieved that using fork(), wait() and exec() as follows (omitting code that is not relevant):

    use strict;
    use warnings;
    use POSIX ":sys_wait_h";

    # Split input files into parts and store the filenames into array @parts
    ...

    my %children;
    foreach my $part (@parts) {
        my $pid = fork();
        die "Cannot fork for $part\n" unless defined $pid;
        if ($pid == 0) {
            exec("sh text_tool $part > $part.out") or die "Cannot exec $part\n";
        }
        print STDERR "Started processing $part with $pid at ".localtime."\n";
        $children{$pid} = $part;
    }

    while (%children) {
        my $pid = wait();
        die "$!\n" if $pid < 1;
        my $part = delete($children{$pid});
        print STDERR "Finished processing $part at ".localtime."\n";
    }

    While I got what I wanted, there is a small problem. Due to the nature of the text-processing tool, some parts complete much earlier than others, in no specific order. The difference can be hours, which means that many cores are idle for a long time, just waiting for a few parts to finish.

    This is where I need help. I want to keep checking which part (or rather, which corresponding process) has exited successfully, so that I can start processing the same part of the next input file. I need your wisdom on how to achieve this. I searched a lot on various forums, but could not work out how it is done.

    Thanks.

    ------UPDATE---------

    Using a hash, I can now find out which process is exiting when. But I fail to understand how to use this code in an if block, so that I can start the next process. Can someone help me with that? I have updated the code accordingly.

    ----------------UPDATE 2--------------

    I guess it's working now. Using Parallel::ForkManager, and a hash of arrays that stores the pids for each input file, I am able to track the subprocesses of each file separately. By keeping a count of the number of subprocesses that have exited, I can call the sub for output parsing as soon as the count reaches 16 for an input file. I will come back if I run into any other problem.
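
    For reference, a condensed sketch of the callback approach described above, using Parallel::ForkManager's run_on_finish hook. The part names and the %done bookkeeping are illustrative, not the actual code:

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my @parts = map { "input.part$_" } 1 .. 16;   # hypothetical part names
    my $pm    = Parallel::ForkManager->new(16);
    my %done;

    # Called in the parent as soon as any child exits.
    $pm->run_on_finish(sub {
        my ($pid, $exit_code, $part) = @_;
        $done{$part} = $exit_code;
        print STDERR "Finished $part (pid $pid, status $exit_code) at " . localtime . "\n";
        # This is the point where the matching part of the *next* input file
        # could be started, instead of waiting for all 16 parts to finish.
    });

    for my $part (@parts) {
        $pm->start($part) and next;               # parent: $part becomes the child's ident
        exec("sh text_tool $part > $part.out") or die "Cannot exec $part\n";
    }
    $pm->wait_all_children;
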

    Thanks a lot for all the help :)

    P.S. Is there any flag I have to set to mark this thread as answered/solved?

Out of Memory Error : V-Lookup on Large Sized TEXT File
7 direct replies — Read more / Contribute
by TheFarsicle
on Apr 24, 2015 at 09:14
    Hello perlmonks,

    I am a newbie to Perl and am working on a Perl script to perform an action similar to a V-Lookup.

    So, as input I have some large text files, around 200 MB each. These text files are to be searched for all the records present in another file, say Reference.txt (this file is normally not more than 1 MB).

    I have written a script to find all the lines in these large files that contain the text (string values) from the Reference.txt file. All the matching records are then written to a new file for each large file.

    The script works fine for normal sizes like 30-40 MB, but when the file grows to more than 100 MB or so, it throws an out-of-memory error.

    I have designed these operations as subroutines and call them.

    The code goes something like this...

    open (FILE, $ReferenceFilePath) or die "Can't open file";
    chomp (@REFFILELIST = (<FILE>));

    open OUTFILE, ">$OUTPUTFILE" or die $!;

    foreach my $line (@REFFILELIST) {
        open (LARGEFILE, $LARGESIZEDFILE) or die "Can't open File";
        while (<LARGEFILE>) {
            my $Result = index($_, $line);
            if ($Result > 0) {
                open(my $FDH, ">>$OUTPUTFILE");
                print $FDH $_;
            }
        }
        close(LARGEFILE);
    }

    close(OUTFILE);
    close(FILE);

    Can you please guide me on where I am going wrong and what would be the best way to address this issue?
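
    For comparison, a minimal sketch of the usual streaming approach: read the small Reference.txt into memory once, then read each large file a single line at a time, so memory use is bounded by the reference list rather than by the large file. The file names are placeholders standing in for the variables above:

    use strict;
    use warnings;

    my $ReferenceFilePath = 'Reference.txt';      # placeholder paths
    my $LARGESIZEDFILE    = 'LargeInput.txt';
    my $OUTPUTFILE        = 'Matches.txt';

    # Read the small reference file once.
    open my $ref_fh, '<', $ReferenceFilePath or die "Can't open $ReferenceFilePath: $!";
    chomp(my @ref_list = <$ref_fh>);
    close $ref_fh;

    open my $out_fh, '>', $OUTPUTFILE or die "Can't open $OUTPUTFILE: $!";

    # Stream the large file line by line; only one line is held in memory at a time.
    open my $large_fh, '<', $LARGESIZEDFILE or die "Can't open $LARGESIZEDFILE: $!";
    while (my $line = <$large_fh>) {
        for my $ref (@ref_list) {
            if (index($line, $ref) >= 0) {        # >= 0 also catches a match at column 0
                print {$out_fh} $line;
                last;                             # one match is enough to keep the line
            }
        }
    }
    close $large_fh;
    close $out_fh;
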

    Thanks in advance.

    FR

DESTROY and AUTOLOAD in 5.20.1
4 direct replies — Read more / Contribute
by szabgab
on Apr 24, 2015 at 05:36
    Given this script:
    use strict;
    use warnings;
    use 5.010;

    use Greeting;

    say 'Hi';
    {
        my $g = Greeting->new;
    }
    say 'Bye';
    and this module:
    package Greeting;
    use strict;
    use warnings;
    use 5.010;
    use Data::Dumper;

    sub new {
        my ($class) = @_;
        return bless {}, $class;
    }

    sub AUTOLOAD {
        our $AUTOLOAD;
        say $AUTOLOAD;
    }

    sub DESTROY {
        say 'destroy';
    }

    1;
    I can see the word "destroy" printed, as I would expect. However, if I remove the DESTROY from the module, I don't see AUTOLOAD being called in place of the missing DESTROY. I only checked it with 5.20.1, but I wonder: what am I missing here?
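
    For contrast, the conventional guard idiom: an AUTOLOAD that explicitly ignores DESTROY (or, equivalently, an empty DESTROY stub). This is shown only as the usual pattern, not as an explanation of the 5.20.1 behaviour being asked about; the package name is hypothetical:

    package Greeting2;     # hypothetical variant of Greeting, for illustration only
    use strict;
    use warnings;
    use 5.010;

    sub new { return bless {}, shift }

    sub AUTOLOAD {
        our $AUTOLOAD;
        return if $AUTOLOAD =~ /::DESTROY$/;   # ignore object destruction explicitly
        say $AUTOLOAD;
    }

    1;
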

    Update

    Reported with perlbug as RT #124387
Getting an unknown error
5 direct replies — Read more / Contribute
by andybshaker
on Apr 23, 2015 at 10:02

    Basically, different arrays hold different pieces of information, and I have to go from one to another, and then to another from that. In this case, I have to take each element in @Genes and extract its corresponding element from a long file, which I read in as @lines. I keep getting a strange error that reads: syntax error at findscaffold.pl line 38, near "$N (". Does anyone know what this is? Here is the code.

    my @Genes = qw(A B C D)
    my @ptt = ("19384..003059 0 - - A","203581..39502 0 + - B)
    my @contig = ();
    my @Coordinates;
    my @Number;
    my $R;

    foreach my $G (@Genes){
        for my $x (0..$#ptt){
            if($ptt[$x] =~ /$G/){
                push(@Coordinates,"$ptt[$x]");
                print "$ptt[$x]\n";}
        }
    }

    foreach my $C (@Coordinates){
        push (@Number, split(" ", $C));}

    my %hash = ();
    my $file = "scaffold_contig.txt";
    open(IN, "<$file") or die "Cannot open file $file\n";
    my @lines = <IN>;

    foreach $1 (@lines){
        chomp($1);
        my %columns = split(">", $1);}
    close(IN);

    print "$lines[1];\n"

    foreach my $N (@Number){
        for $R (0..$#lines){
            if($lines[$R] =~ /$N/){
                print "lines[$R]\n"
            }
        }
    }

    Here is line 38: foreach my $N (@Number){

Retrieving content from couchdb using CouchDB::Client
3 direct replies — Read more / Contribute
by shivam99aa
on Apr 23, 2015 at 09:36

    I am able to create new documents using CouchDB::Client, as well as to verify whether a doc is present. What I am not able to do is retrieve the contents of a doc, because I cannot work out the correct syntax for it. I am not a Perl genius, so taking a look at the source code did not help me either.

    use warnings;
    use CouchDB::Client;

    my $c  = CouchDB::Client->new(uri => 'http://127.0.0.1:5984/');
    my $db = $c->newDB('test');

    my $doc = $db->newDoc('12345', undef, {'foo'=>'bar'})->create;

    if ($db->docExists('12345')){
        print "hello\n";
    }

    #my $doc=CouchDB::Client::Doc->new($db);
    print $doc->retrieve('12345');

    I am able to create the document, but then I need to comment that line out on the next run, as it gives a storage error. After commenting it out, I have no way to retrieve the doc, since I have no object remaining. That should not be a constraint, though: there should be a way to retrieve a doc through the db object by giving it the id.
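
    For illustration, a sketch of the retrieval step. It assumes (please verify against the installed CouchDB::Client POD) that a CouchDB::Client::Doc object created via newDoc() can be filled from the server with retrieve() and that its contents are then available via data():

    use strict;
    use warnings;
    use CouchDB::Client;
    use Data::Dumper;

    my $c  = CouchDB::Client->new(uri => 'http://127.0.0.1:5984/');
    my $db = $c->newDB('test');

    # Build a Doc object for an existing id and pull its contents from the server.
    my $doc = $db->newDoc('12345');
    $doc->retrieve;                      # fetches the revision and content for this id
    print Dumper($doc->data);            # the document's fields as a hashref
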

Regex for files
5 direct replies — Read more / Contribute
by bmcquill
on Apr 22, 2015 at 22:01
    I'm trying to get better at regex, and I'm starting with Perl. I want to be able to go through a directory and find all the files whose names begin with "messages" and MAY have a "." and a digit after it, but it should not match something that has, say, .txt, .pl, etc. In other words, I want to find all the files named messages, messages., messages.1, etc., but NOT messages.txt or messages.pl. Does that help? Any assistance is greatly appreciated.
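
    For illustration, a minimal sketch of one such pattern; the directory is a placeholder. The regex matches messages, messages. and messages.<digits>, but not messages.txt or messages.pl:

    use strict;
    use warnings;

    my $dir = '/var/log';    # hypothetical directory

    opendir my $dh, $dir or die "Cannot open $dir: $!";
    # Anchored pattern: "messages", optionally followed by a dot and zero or more digits.
    my @wanted = grep { /^messages(?:\.\d*)?$/ } readdir $dh;
    closedir $dh;

    print "$_\n" for @wanted;
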
Best practice for sending results to a user via email
4 direct replies — Read more / Contribute
by Anonymous Monk
on Apr 22, 2015 at 16:30
    Dear Monks,
    I hereby ask your wisdom on the following problem:
    I have set up a simple web server in PHP, with a submission form (a textarea). When the user submits the form, the contents are put into a file and a Perl script is executed on that file. At the end, the output of the script is written to a text file, and an image is also produced.
    My question has two parts:
    1) Because this script takes some time to execute, I think it is not good practice to just let it run on the web server, since there is a good chance it will hang and then output nothing. So I thought it best to just take the input from the user and later send them an email saying "your work is completed", with a link to an HTML page containing the results.
    Does this sound like reasonable practice to you?
    2) Can you give me some hints as to how such a thing is accomplished? I mean, what steps should I follow, and can you point me to some examples of sending emails that direct the user to an HTML page with the final results?
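
    For part 2), a minimal sketch of the notification step, assuming MIME::Lite and a working local mailer on the server; the addresses, job id and result URL are placeholders:

    use strict;
    use warnings;
    use MIME::Lite;

    # Placeholders: the recipient, job id and result URL would come from the
    # web application's own bookkeeping.
    my $to         = 'user@example.com';
    my $job_id     = 'job-12345';
    my $result_url = "http://www.example.com/results/$job_id.html";

    my $msg = MIME::Lite->new(
        From    => 'noreply@example.com',
        To      => $to,
        Subject => "Your job $job_id is completed",
        Type    => 'text/plain',
        Data    => "Your work is completed.\nYou can view the results here:\n$result_url\n",
    );

    # Uses the local sendmail binary by default; an SMTP relay can be chosen
    # with MIME::Lite->send('smtp', 'mail.example.com') beforehand.
    $msg->send;

    Such a script would typically be run by a queue or cron worker after the job finishes, so the link only goes out once the results page actually exists.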
    Thank you in advance!
New Meditations
Refactoring Perl5 with XS++
5 direct replies — Read more / Contribute
by rje
on Apr 25, 2015 at 01:06

    Last time I mused aloud about "refactoring" Perl, I referenced Chromatic's statement/challenge:

    "If I were to implement a language now, I'd write a very minimal core suitable for bootstrapping. ... Think of a handful of ops. Think very low level. (Think something a little higher than the universal Turing machine and the lambda calculus and maybe a little bit more VMmy than a good Forth implementation, and you have it.) If you've come up with something that can replace XS, stop. You're there. Do not continue. That's what you need." (Chromatic, January 2013)

    I know next to nothing about XS, so I started reading perldoc.

    I'm thinking about the problem, so if there is a question, it would be "what NOW?"

    Should I bother with thinking about bytecodes? In what sense could it be a replacement for XS? What does "replace XS" even MEAN? (i.e. perhaps it just means "remove the need to use perl's guts to write extensions, and improve the API").

    Most importantly, am I wasting people's time by asking?

    I'm trying to come up with my own answers, and learn by trying. But wisdom is in knowing that some of you guys have already thought through this. If you can help bring me up to speed, I'd appreciate it.

    UPDATE: I see even within the Lorito page, it was understood that the discussion was to some degree about Perl's API: "This is an issue of API design. If we understand the non-essential capabilities we want to support (e.g. optimization passes, etc), we can design the API so that such capabilities can be exploited but not required. - cotto "
