The Monastery Gates

New Questions
DBIx::Class Looping thru resultset
1 direct reply — Read more / Contribute
by phildeman
on Apr 29, 2016 at 16:04

    Hi All,

    Can variables be used in place of column names when looping through a DBIx::Class array of row objects?
    For example:

    my @db_colnames = ('lastname','firstname','middlename');

    my @rows = $schema->resultset( 'myTable' )->search(
        {},
        { order_by => [qw/ lastname firstname middlename /] }
    );

    foreach my $row (@rows) {
        foreach my $col (@db_colnames) {
            print $row->$col ."\t";
        }
        print "\n";
    }

    No values were printed. However, when I used the actual column names to retrieve the data, it worked fine. For example:

    . .
    print $row->lastname . "\t" . $row->firstname . "\t" . $row->middlename . "\t";
    print "\n";
    . .
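    As an aside, calling a method whose name is held in a variable does work in Perl whenever an accessor of that name really exists, so the failure is worth probing elsewhere (for instance, whether the accessor names match the column names). A self-contained sketch with a made-up row class standing in for a DBIx::Class row:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical row class standing in for a DBIx::Class row object;
# the only point here is the $row->$col dynamic dispatch.
package Row;
sub new       { my ( $class, %f ) = @_; return bless {%f}, $class }
sub lastname  { $_[0]{lastname} }
sub firstname { $_[0]{firstname} }

package main;
my $row = Row->new( lastname => 'Doe', firstname => 'Jane' );
my @out;
for my $col ( 'lastname', 'firstname' ) {
    push @out, $row->$col;    # method name taken from a variable
}
print join( "\t", @out ), "\n";
```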

    Thanks for your help.

Parse multiline logfile
2 direct replies — Read more / Contribute
by chris2013
on Apr 29, 2016 at 15:04
    Dear All,

    I'd like to parse a Charon (StrongSwan) logfile.

    Somewhere it logs a failed login. When that line is found, I'd like to look up the username and IP address that are mentioned a few lines earlier.

    What's the best approach? Is there a framework that can do this, where I only have to fill in the regexes?

    Should I read the whole logfile into an array and then search backwards? Or is there something like grep? I've tried using a shell script and letting perl one-liners do the regex work. Unfortunately I couldn't escape the single quotes around the e-mail address: they have to be quoted for both the shell and the regex. I tried writing them to an environment variable, but that didn't work either. Probably there are completely different and better solutions.

    export MYP="identity '([A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,})'"
    EMAIL=`grep -B15 "$DATE" $LOGFILE | grep "$CONID" | grep identity | perl -nle 'print "$1" if /$ENV{MYP}/'`


    Apr 29 10:01:10 charon: 07[NET] <con1|33> received packet: from[55842] to[4500] (80 bytes)
    Apr 29 10:01:10 charon: 14[IKE] <con1|33> received EAP identity ''
    Apr 29 10:01:11 charon: 14[IKE] <con1|33> EAP-MS-CHAPv2 verification failed, retry (1)
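    For comparison, a single-pass pure-Perl sketch that remembers the last identity seen per connection and reports it when a failure line appears; no backtracking through the file is needed. The field layout and the sample identity below are assumptions based on the excerpt above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample log held in a string so the sketch is self-contained
# (the e-mail address is made up).
my $log = <<'LOG';
Apr 29 10:01:10 charon: 14[IKE] <con1|33> received EAP identity 'user@example.org'
Apr 29 10:01:11 charon: 14[IKE] <con1|33> EAP-MS-CHAPv2 verification failed, retry (1)
LOG

my %identity;    # "con1|33" => last EAP identity seen
my @failures;
open my $fh, '<', \$log or die $!;
while ( my $line = <$fh> ) {
    if ( $line =~ /<([^>]+)>\s+received EAP identity '([^']*)'/ ) {
        $identity{$1} = $2;    # remember per connection id
    }
    elsif ( $line =~ /<([^>]+)>.*verification failed/ ) {
        push @failures, [ $1, $identity{$1} // 'unknown' ];
    }
}
printf "failed login: %s on %s\n", $_->[1], $_->[0] for @failures;
```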

    - Chris

Filtering array of strings and numbers
3 direct replies — Read more / Contribute
by nysus
on Apr 29, 2016 at 11:28

    I'm stuck on what I think should be a simple problem. I want to filter out elements from an array. I don't know whether the array contains strings, numbers, or both. And I don't know whether the value I want to filter out is a string or a number:

    my $string_or_number;
    my @filtered = grep { $string_or_number ne $_ } @strings_or_numbers;

    What's the best way to accomplish this without triggering an error if the wrong equality operator is used?
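    One workable pattern is to pick the comparison at run time with Scalar::Util's looks_like_number (core), comparing numerically only when both sides are numeric and as strings otherwise; a sketch with made-up data:

```perl
use strict;
use warnings;
use Scalar::Util qw(looks_like_number);

# Numeric compare only when both sides look numeric; string compare
# otherwise. Neither branch warns. Note '10.0' equals 10 numerically
# but not as a string, which is usually what is wanted here.
sub same {
    my ( $x, $y ) = @_;
    return looks_like_number($x) && looks_like_number($y)
        ? $x == $y
        : $x eq $y;
}

my $string_or_number   = 10;
my @strings_or_numbers = ( 'apple', 10, '10.0', 'banana' );
my @filtered = grep { !same( $string_or_number, $_ ) } @strings_or_numbers;
print "@filtered\n";
```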

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks

Detached forking in a CGI script
4 direct replies — Read more / Contribute
by xylose
on Apr 29, 2016 at 11:06

    I'm having a problem satisfying a number of constraints in a CGI script I'm writing.

    I need the script to fork a child process and to get hold of the pid of this child, but to then not make the parent (the CGI process) wait for the child, and allow it to exit immediately.

    One extra complication is that in the child I exec a new program (an R script), but capture STDOUT and STDERR into log files, so I can't close those filehandles before doing the exec.

    I've tried a number of different approaches, but anything which gives me the pid and starts the child process correctly always makes the parent hang until the child is done.

    My current (not working) best guess is:

    $SIG{CHLD} = 'IGNORE';

    my $pid = fork();
    if ($pid) {
        # We're the child and we need to start the analysis
        # First we'll write out pid into a file in the run
        # folder so that the results tracker can tell if we've
        # died.
        open (PID,'>','pid.txt') or die "Failed to write to pid file : $!";
        print PID $pid;
        close PID or die "Failed to write to pid file: $!";

        exec("Rscript analysis.r > log.txt 2>errors.txt");
        exit(0);
    }

    print $q->redirect("script.cgi?job_id=$job_id");

    In this version the child runs, but the parent doesn't exit until the child has completed.

    Any ideas how I can have my cake and eat it?
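    For reference, one common detachment pattern is fork plus setsid() in the child before the exec, with $SIG{CHLD} set to 'IGNORE' so the parent never waits. Note that fork() returns 0 in the child and the child's pid in the parent. A runnable sketch in which the sleep command stands in for the real "Rscript analysis.r"; file names are illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(setsid);

$SIG{CHLD} = 'IGNORE';          # children are reaped automatically

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {              # fork() returns 0 in the *child*
    setsid() or die "setsid failed: $!";   # detach from the CGI session
    open my $fh, '>', 'pid.txt' or die "pid file: $!";
    print $fh $$;               # the child's own pid
    close $fh;
    open STDOUT, '>', 'log.txt'    or die $!;
    open STDERR, '>', 'errors.txt' or die $!;
    exec( 'sleep', '1' ) or die "exec failed: $!";   # stand-in for Rscript
}

# Parent: we already have the child's pid and can exit right away.
print "started child $pid\n";
```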


Another concurrent HTTP requests script - code review request
No replies — Read more | Post response
by alanek
on Apr 29, 2016 at 06:02
    Hi PerlMonks! After years of using Parallel::ForkManager to speed up making thousands of HTTP requests, I was advised by Ikegami to give Net::Curl::Multi a try. And I have to say it's awesome! Now, because it's my first time using it, and because the final version of the code will have to deal with an external, 3rd-party API, I really want to be sure it works fine and doesn't have any bugs. Below you will find my test script based on Ikegami's advice. It works, and I don't have any problems with it, but maybe some more experienced Perl users will notice something that I didn't, or tell me if I shouldn't do something the way I did, etc. My main concern is the LINE loop and the way that I add a URL to the N::C::M queue, fill the queue with more URLs, wait for one of them to finish, and go on to the next URL and add it. Some notes:
    • I'm using an external txt file with URLs because there might be A LOT of them (hundreds of thousands, even), so I don't want to load everything into an array.
    • As I don't fully understand how Net::Curl::Multi's concurrency works, please tell me whether I should expect any problems with putting a MySQL UPDATE command (via DBI) inside the RESPONSE loop (besides higher server load, obviously; I expect the final script to run with at least 50 concurrent N::C::M workers).
    #!/usr/bin/perl
    use Net::Curl::Easy qw( :constants );
    use Net::Curl::Multi qw( );

    sub make_request {
        my ( $url ) = @_;
        my $easy = Net::Curl::Easy->new();
        $easy->{url} = $url;
        $easy->setopt( CURLOPT_URL, $url );
        $easy->setopt( CURLOPT_HEADERDATA, \$easy->{head} );
        $easy->setopt( CURLOPT_FILE, \$easy->{body} );
        return $easy;
    }

    my $maxWorkers = 10;
    my $multi = Net::Curl::Multi->new();
    my $workers = 0;
    my $i = 1;

    open my $fh, "<", "urls.txt";
    LINE: while ( my $url = <$fh> ) {
        chomp( $url );
        $url .= "?$i";
        print "($i) $url\n";

        my $easy = make_request( $url );
        $multi->add_handle( $easy );
        $workers++;

        my $running = 0;
        do {
            my ($r, $w, $e) = $multi->fdset();
            my $timeout = $multi->timeout();
            select $r, $w, $e, $timeout / 1000
                if $timeout > 0;

            $running = $multi->perform();
            RESPONSE: while ( my ( $msg, $easy, $result ) = $multi->info_read() ) {
                $multi->remove_handle( $easy );
                $workers--;
                printf( "%s getting %s\n", $easy->getinfo( CURLINFO_RESPONSE_CODE ), $easy->{url} );
            }

            # dont max CPU while waiting
            select( undef, undef, undef, 0.01 );
        } while ( $workers == $maxWorkers || ( eof && $running ) );

        $i++;
    }
    close $fh;
Any module for saving data as perl data?
3 direct replies — Read more / Contribute
by PerlBroker
on Apr 29, 2016 at 03:25
    Is there any module or way to simply save data as Perl source? For example, if I have $a = "hello", can I save it in exactly that form in a file, i.e. $a = "hello"; and the same for hashes and arrays?
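    Core Data::Dumper does essentially this: it emits valid Perl source that recreates the data, with variable names you supply, and the result can be written to a file and later restored with do() or eval. A sketch with made-up data:

```perl
use strict;
use warnings;
use Data::Dumper;

$Data::Dumper::Sortkeys = 1;   # stable, diff-able output
$Data::Dumper::Purity   = 1;   # safe for nested/self-referencing data

my %config = ( greeting => 'hello', count => 3 );

# Produces text like: $config = { 'count' => 3, 'greeting' => 'hello' };
my $perl_src = Data::Dumper->Dump( [ \%config ], [ 'config' ] );

# Round-trip it the way a file read back with do()/eval would:
my $restored = eval "my $perl_src; \$config";
die $@ if $@;
print $restored->{greeting}, "\n";
```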
Suppress 'Can't chdir to' warnings for File::Find
3 direct replies — Read more / Contribute
by mabossert
on Apr 28, 2016 at 12:03

    I am writing some prototype code for a larger application that needs to search all directories from a particular starting point and return all directories that contain certain types of files. The function works fine except that it produces warnings about not being able to chdir into directories that the user running the application lacks permission to read.

    Perhaps I am just being pickier than I should be, but the application needs to traverse directories that belong to the user and also ones that do not, but which the user has sufficient permission to read. When I run the test code, everything works as expected, but it produces warnings. They are expected, but I would like to suppress them, or "tell" File::Find to skip any directories the user does not have permission to access. Or maybe there is a way to have File::Find first test whether it can read a directory and move on if it cannot?

    Here is my current code. Any suggestions would be greatly appreciated.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use 5.016;
    use Carp qw(cluck carp croak);
    use JSON qw( encode_json );
    use File::Find;
    use Data::Dumper;

    my %seen;
    my @files;

    find ({ wanted => \&wanted }, '/mnt/lustre');

    say Dumper(\@files);

    sub wanted {
        if($File::Find::name =~ /\.nt$|^dbQuads$|^$|^string_table_chars.index$|^string_table_chars$/
            && !exists $seen{$File::Find::dir}) {
            my $user = (getpwuid ((stat $File::Find::dir)[4]))[0];
            my $name = $1 if $File::Find::dir =~ /\/([^\/]+)$/;

            #=+ Would like to know how big the directory contents are
            my $raw_size = 0;
            my $db_size = 0;
            my $built = 0;

            find(sub {
                if(-f $_ && $_ =~ /\.nt$/) {
                    $raw_size += -s $_;
                }
                elsif(-f $_ && $_ =~ /^dbQuads$|^string_table_chars.index$|^string_table_chars$/) {
                    $db_size += -s $_;
                    $built = 1;
                }
            },$File::Find::dir);

            my %temp = (
                owner    => $user,
                raw_size => $raw_size,
                db_size  => $db_size,
                name     => $name,
                path     => $File::Find::dir,
                built    => $built
            );
            push @files, \%temp;
            $seen{$File::Find::dir} = 1;
        }
    }
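    One documented File::Find option that fits the "test first, then move on" idea is preprocess: it receives each directory's entry list before find() descends, so unreadable subdirectories can be pruned there and the chdir warning never fires. A minimal sketch; the wanted body and the start directory are placeholders:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use File::Find;

my @found;
find({
    # your real wanted() goes here
    wanted     => sub { push @found, $File::Find::name if -f },
    # keep only entries that are not directories, or that are
    # directories we can both read and enter
    preprocess => sub { grep { !-d $_ || ( -r $_ && -x $_ ) } @_ },
}, '.');

print scalar @found, " files seen\n";
```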
Error Message - PL_perl_destruct_level at /usr/lib64/perl5/
3 direct replies — Read more / Contribute
by NorCal12
on Apr 28, 2016 at 00:45

    I am new to this forum and certainly not a Perl expert.

    I have a website that is an auction site for the commercial fishing industry. It has been up and running since 2011. It is currently located on a shared hosting site, and I am in the process of moving it to a VPS on another company's server. I have moved all the files and the database to the new location, and I have been testing everything before I have the DNS pointed at it. For the most part everything looks good. I have an archive section, where a user can look at tables of past sales. These are generated from data stored in a MySQL database. The display of past sales works fine.

    However, when I test a new sale I get an error when that sale is being inserted into the database. For speed in testing I have been using a "buy-it-now" feature rather than having to wait for an auction to end.

    The code begins as below:

    #!/usr/bin/perlml
    BEGIN {
        my $base_module_dir = (-d '/home/jeffer36/perl' ? '/home/jeffer36/perl' : ( getpwuid($>) )[7] . '/perl/');
        unshift @INC, map { $base_module_dir . $_ } @INC;
        unshift @INC, '/home/jeffer36/perl5/lib/perl5',
                      '/home/jeffer36/perl5/lib/perl5/x86_64-linux',
                      '/home/jeffer36/perl5/lib/perl5/x86_64-linux/Bundle';
    }
    use POSIX qw(strftime);
    use File::Copy;
    use strict;
    use CGI;
    use CGI::Session;
    use CGI::Carp qw(fatalsToBrowser);
    use File::CounterFile;
    use Data::Dumper;
    use DBI;
    #use DBD::mysql;

    When the sale is being inserted into the database this is the error message:

    install_driver(mysql) failed: Can't load '/home/jeffer36/perl5/lib/perl5/x86_64-linux/auto/DBD/mysql/' for module DBD::mysql: /home/jeffer36/perl5/lib/perl5/x86_64-linux/auto/DBD/mysql/ undefined symbol: PL_perl_destruct_level at /usr/lib64/perl5/ line 200, <BUYERFILE> line 88.
    at (eval 16) line 3
    Compilation failed in require at (eval 16) line 3, <BUYERFILE> line 88.
    Perhaps a required shared library or dll isn't installed where expected at ../auction/ line 284

    Line 284 in is:

    my $dbh = DBI->connect("DBI:mysql:$db:$server", $userid, $passwd);

    I have searched the web for information on this error message and have come up empty. Does anyone have suggestions on how to correct the problem?

The process cannot access the file because it is being used by another process?[Problem disappeared!]
6 direct replies — Read more / Contribute
by BrowserUk
on Apr 27, 2016 at 17:44


    Before I posted, I tried a bunch of things and ran it at least a dozen times, getting the exact same failure. I then cut the script off the top of the data, renamed the file .dat, and stuck the script into a new file with the old name. I changed <DATA> to <> and supplied the data file on the command line, and it worked first time.

    I've just recreated the all-in-one script; and now it runs perfectly. I have no explanation for the error or the cure; but I suspect 1nickt called it.

    Thanks for your help guys.

    I'm getting this error when I try to run the following code:

    C:\Motor>
    The process cannot access the file because it is being used by another process.

    Another process is accessing the DATA pseudo-handle?

    The data is large, 2.4 million lines, but I'm sure I've processed much larger datasets from <DATA> before; and I've commented out the storing of the data to the array, so it isn't running out of memory. Cluebats anyone?

    #! perl -slw
    use strict;

    my @data;
    my $i = 0;
    #$data[ $i++ ] = [ split ' ' ]
    ++$i while <DATA>;
    print $i;

    __DATA__
    -69.282032302755084 40.000000000000014 0 -1
    -69.123493781255831 39.908467741935496 -1.4443382565142906e-006 -1
    -68.748013135538145 40.911009397420145 0 -1
    -68.964955259756593 39.816935483870985 -2.990721348858345e-006 -1
    -68.370049396495517 40.298149116372635 -6.3944096502804299e-006 -1
    -68.202015544462682 41.814890597403163 0 -1
    ... 2.4 million lines omitted.

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
    In the absence of evidence, opinion is indistinguishable from prejudice.
Read Directory and getlines in .csv
3 direct replies — Read more / Contribute
by Anonymous Monk
on Apr 27, 2016 at 10:57

    Hello. I am new to Perl and programming. I have written a Perl program to get the names of the .csv files in a different directory that were uploaded to the server today (by modified date). These files are 13+ MB, so I don't want to copy or move them to my new directory. I also wrote another Perl program which reads a file, extracts only the lines I want, and places those lines in a .csv with the criteria I need for my final report. My question is: how do I integrate these two programs into one program without moving these large files out of the directory they are housed in? Is this possible?

    #This is the program to get the .csv file names from the directory:
    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::stat;
    use Text::CSV_XS;
    use IO::File;

    my $dirname = "Daily_QoS_CPD_link";
    opendir(DIR, $dirname) or die "Not able to open $dirname $!";

    # Create an array, Open the directory, get only .csv's modified in the last 24 hours
    my @dir = sort { -M "$dirname/$a" <=> -M "$dirname/$b" }
              grep /\.csv/, $dirname ne '.' && $dirname ne '..', readdir DIR;
    rewinddir(DIR);

    # create a list of csv's for today into a text file.
    my $One_day = 86400;
    foreach my $list (@dir){
        my $diff = time()-stat("$dirname/$list")->mtime;
        if( $One_day > $diff){
            open FILE, ">>CPD_Files.txt" or die "Not able to open file $!";
            print FILE "$list\n";
        }
    }
    closedir DIR;
    close FILE;

    # This is the code for getting the lines from the .csv's that I need
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $Finput = "cpd_link_ABC_cpddrops_50_300300000.csv";
    my $Foutput = "data0426-2.csv";

    open my $FH, "<", $Finput;
    open my $out, ">", $Foutput;

    my $csv = Text::CSV_XS->new({binary => 1, eol => $/ });
    while(my $row = $csv->getline($FH)) {
        my @fields = @$row;
        if ($fields[2] eq "DROPPED-10" || $fields[2] eq "CALL_START" || $fields[2] eq "CALL_END") {
            $csv->print($out, $row);
        }
    }
    close $FH;
    if (not $csv->eof){
        $csv->error_diag();
    }
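    A sketch of the two steps merged: collect today's .csv names, then filter each one by opening it via its full "$dirname/$file" path, so the big files are never copied or moved. Plain split stands in for the Text::CSV_XS parsing so the sketch is self-contained (the real parsing drops in unchanged); the directory and the fixture rows it writes are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::stat;

my $dirname = "Daily_QoS_CPD_link";
my $One_day = 86400;
my %wanted  = map { $_ => 1 } qw(DROPPED-10 CALL_START CALL_END);

# tiny made-up fixture so the sketch runs anywhere
mkdir $dirname unless -d $dirname;
open my $fix, '>', "$dirname/sample.csv" or die $!;
print {$fix} "1,2,CALL_START,x\n1,2,OTHER,y\n1,2,CALL_END,z\n";
close $fix;

# step 1: names of .csv files modified in the last 24 hours
opendir my $dh, $dirname or die "Not able to open $dirname: $!";
my @recent = grep { /\.csv$/ && time() - stat("$dirname/$_")->mtime < $One_day }
             readdir $dh;
closedir $dh;

# step 2: filter each file in place of the intermediate CPD_Files.txt
open my $out, '>', 'data_today.csv' or die $!;
for my $file (@recent) {
    open my $in, '<', "$dirname/$file" or die "$file: $!";
    while ( my $line = <$in> ) {
        my @fields = split /,/, $line;
        print {$out} $line if defined $fields[2] && $wanted{ $fields[2] };
    }
    close $in;
}
close $out;
```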
Comparing Lines within a Word List
9 direct replies — Read more / Contribute
by dominick_t
on Apr 26, 2016 at 15:54
    Hello all-- New to Perl, new to this forum. Many thanks in advance for reading and offering help.

    I have a background in mathematics and have done a bit of programming, mostly for specific tasks that led me to learn just enough of a language to achieve them. So I wouldn't call myself thoroughly conversant in any language. I am, however, interested in learning Perl more deeply, as I have some long-term projects that will require managing and searching through word lists in creative ways. I've been reading the O'Reilly book Learning Perl, but I have a specific problem that I need to solve somewhat urgently, and I'm afraid I haven't learned enough Perl yet to even attempt some code that could do it.

    So, here's the problem: I have a long list of text strings saved in a .txt file. What I am interested in are pairs of words that are exactly the same, except in one position . . . in particular, where one word has, say, an R, the other word has, say, an S. So if the word list was a standard dictionary and I ran the code on it, the output would include the pairs RAT and SAT, also RATE and SATE, also BARE and BASE, also BARR and BARS.

    This strikes me as something that should be possible using regular expressions in a Perl script. Am I right about that? If so, and if it's pretty easy for an expert to write some code that will do this, I would be much obliged, not just because I need a speedy solution to this question due to a deadline, but also because it will give me a great piece of example code to help me in getting my hands dirty learning Perl.

    All best-- Dominick
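    A hash lookup is one straightforward alternative to a single clever regex: for each word, substitute an S at each R position and test whether the result is also in the list. A sketch on a made-up word list (for a real dictionary the same loop applies, one word per line):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @words = qw(RAT SAT RATE SATE BARE BASE BARR BARS DOG);
my %seen  = map { $_ => 1 } @words;   # O(1) membership test

my @pairs;
for my $word (@words) {
    for my $pos ( 0 .. length($word) - 1 ) {
        next unless substr( $word, $pos, 1 ) eq 'R';
        my $candidate = $word;
        substr( $candidate, $pos, 1 ) = 'S';   # swap this R for an S
        push @pairs, "$word/$candidate" if $seen{$candidate};
    }
}
print "$_\n" for @pairs;
```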
Fast provider feeding slow consumer
3 direct replies — Read more / Contribute
by leostereo
on Apr 24, 2016 at 12:11

    Hi friends, a week ago I had to deal with this situation where a very fast provider is feeding a slow consumer.
    I posted my solution using forks, which was working fine on a test server, but when I put it to run on a production server it crashed.
    The testing server is a very old box running CentOS 6 and the production server is a virtual machine on a VMware platform running Oracle RH ver 7.
    Instead of figuring out why it was running fine on one machine and crashing on the other, I decided to go for a parallel solution.
    Some users suggested that I read about parallel preforking, so I came up with these piped scripts:
    ./ | ./

    I want to say that I'm running both scripts using pipes for two reasons:
    First: I can not merge them ... I don't know how to do it, so some help on this would be great.
    Second: I realized that this way I can use the tee command and analyze both outputs.
    I would like to share both scripts so you can help me to improve them, or maybe suggest other alternatives for this task. Thanks

    #!/usr/bin/perl
    use IO::Socket::INET::Daemon;
    use Proc::Daemon;
    use Proc::PID::File;
    use IO::Handle;

    STDOUT->autoflush(1);

    my $host = new IO::Socket::INET::Daemon(
        host    => '',
        port    => 7777,
        timeout => 20,
        callback => {
            data => \&data,
        },
    );
    $host->run;

    sub data {
        my ($io, $host) = @_;
        my $line = $io->getline;
        chomp($line);
        return 0 unless $line;
        print "$line\n";
        return !0;
    }

    #!/usr/bin/perl
    use DBI;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(10);
    $forks = 1;
    while (<>) {
        $pm->start() and next; # Parent nexts
        ###
        my ($type, $ip, $mac, $bsid, $datecode) = split(',', $_);
        $cpe = $ip;
        $mac =~ s/-//g;
        $community = 'public';
        $snmp_rssi = '.';
        $output = qx(snmpwalk -v2c -t1 -c $community $cpe $snmp_rssi 2>&1);
        # this is the task that delays the consumer process.
        if ( $output eq "Timeout: No Response from $ip" ) {
            $rssi  = 0;
            $error = 'SNMP not responding. Upgrade firmware';
        }
        else {
            @result = split(/:/, $output);
            $rssi = $result[3];
            $rssi =~ s/ //g;
            $rssi =~ s/\n//g;
            if ($rssi < -100) {
                $rssi = $rssi/100;
            }
            $rssi = int($rssi);
        }
        $dbh = DBI->connect("DBI:mysql:database=cpe_info;host=;port=3306",
                            "account_process", "neting.!");
        $query = "INSERT INTO cpe_info(mac,ip,bsid,rssi) VALUES".
                 "('$mac','$ip','$bsid','$rssi')".
                 "ON DUPLICATE KEY UPDATE ip='$ip',bsid='$bsid',rssi='$rssi'";
        $sth = $dbh->prepare($query);
        $sth->execute();
        $dbh->disconnect();
        print "we are on fork number $forks\n";
        $forks++;
        ###
        $pm->finish();
    }

    Last comment: I was also trying to print the fork number at the end of the consumer script. I did not get the expected output, since all the lines print "1", but I accidentally realized that it was the correct output, since each iteration runs in a different process. So another goal for me is to learn how I can get the fork's number. Regards.
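    On the fork-numbering question, a core-only sketch: the counter has to be incremented in the parent before the fork, because each child only sees a copy of the variable as it was at its own fork. The same placement works with Parallel::ForkManager, i.e. increment before $pm->start():

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $forks = 0;
my @pids;
for my $job ( 1 .. 3 ) {
    $forks++;                       # increment in the *parent*, before fork
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {              # child: sees the count at its own fork
        print "we are on fork number $forks\n";
        exit 0;
    }
    push @pids, $pid;               # parent keeps going
}
waitpid $_, 0 for @pids;
```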

New Meditations
Reversed .= operator
3 direct replies — Read more / Contribute
by 1nickt
on Apr 29, 2016 at 12:40

    Hi all,

    I'm sure we've all wished for a concatenation operator that would prepend a string to a string in the same way the .= operator appends.

    So why isn't there one?

    It's silly that you can write:

    $foo .= 'bar';
    But not:
    $baz =. 'qux';
    and instead have to do:
    $baz = 'qux' . $baz;
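    Not an operator, but 4-argument substr already spells "prepend in place" without naming the variable twice on one line:

```perl
use strict;
use warnings;

my $baz = 'baz';
substr( $baz, 0, 0, 'qux' );   # splice 'qux' in at offset 0, replacing nothing
print "$baz\n";                # quxbaz
```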

    Today I got to wondering if I had missed that such an operator had been introduced in some recent Perl version so I ran the code, and to my surprise Perl said:

    Reversed .= operator at -e line 5.
    syntax error at -e line 5, near "=."
    Now, if Perl knows that this particular syntax error is a "reversed .= operator", and not, say, "some new operator I didn't know about" - i.e. the syntax is not in use for anything else - then why isn't it implemented?

    Can any guts gurus shed any light?

    The way forward always starts with a minimal test.
Regular Expreso 2
2 direct replies — Read more / Contribute
by choroba
on Apr 26, 2016 at 16:24
    As you might have noticed, I like programming puzzles and brain teasers. But I hadn't participated in a real public contest... until today. I registered to Regular Expreso 2 on HackerRank. The participants had 24 hours to solve 8 tasks, Perl was among the supported languages. The contest was regular expression-centered, your code had to always end the same:
    $Test_String = <STDIN> ;
    if($Test_String =~ /$Regex_Pattern/){
        print "true";
    } else {
        print "false";
    }

    The top 10 contestants (most points + shortest time) won a T-shirt. Once I realised there were more than 10 people with the full score, I knew I wasn't getting one, but I still wanted to get the full score.

    But I had no idea how to solve one of the tasks: the input was a long string of zeroes and ones. Your regex had to recognise whether the string encoded two binary numbers in the following way: when you reversed the string and extracted the odd digits, you got a number three times greater than the one built from the remaining digits. For example,

    1110110001 => 00111, 10101
                  7      21
    3 * 7 = 21, accept!

    I wrote a short script to generate some examples, but I wasn't able to find the pattern. Moreover, the regex couldn't have more than 40 characters!

    Then I remembered Perl has a way to run code in a regex: the (?{...}) pattern. I read the relevant perlre section several times and tried something like the following:

    use bigint;

    sub check {
        my ($bin, $three) = ('0b') x 2;
        my $s = $_;
        while ($s) {
            $three .= chop $s;
            $bin   .= chop $s;
        }
        return oct $three == 3 * oct $bin
    }

    $Regex_Pattern = qr/(?{ check() })/;

    The problem here is that (?{...}) always matches. Fortunately, you can use the code pattern as the condition in a conditional pattern, (?(condition)yes-pattern|no-pattern).

    As the yes-pattern, I used ^ which always matches, and (*FAIL) for the no-pattern:

    $Regex_Pattern = qr/^(?(?{ check() }) ^ | (*FAIL) )/x;
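    As a quick check that this conditional construct behaves as described, here is a stripped-down demo in which the real check() is replaced by a simple flag; the match outcome is decided entirely by the code condition, not by the string being matched:

```perl
use strict;
use warnings;

my $flag;
# yes-pattern ^ always matches; no-pattern (*FAIL) always fails
my $re = qr/^ (?(?{ $flag }) ^ | (*FAIL) )/x;

$flag = 1;
my $yes = ( "anything" =~ $re ) ? "true" : "false";
$flag = 0;
my $no  = ( "anything" =~ $re ) ? "true" : "false";
print "$yes $no\n";
```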

    The qr adds some characters, so to be sure I don't overflow, I renamed the subroutine to c and golfed the solution to


    I got the full score! Would you call such a solution cheating? On one hand, I see that's not what the organisers wanted me to do, on the other hand, that's what Perl regular expressions give you. In fact, with the "start or fail" pattern, I can solve almost any problem with a regular expression!

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Good practice: A case for qr//
1 direct reply — Read more / Contribute
by LanX
on Apr 25, 2016 at 13:28
    Just wanted to share why using qr// to store regexes is usually a better idea than using strings...

    Today I was told to debug why a night batch had failed to complete over the last few weeks ...

    As it turned out, filenames were checked against a list of hardcoded regexes, and one of them had a typo. Instead of filter => ".*pl|.*txt" it had ".*pl|*.txt", which was hard to spot among many other regexes and caused a runtime error.

    Now, using qr would have caused filter => qr/.*pl|*.txt/ to fail immediately at compile time.

    And since my colleagues use Komodo, which runs perl -c in the background (like flymake-mode in emacs), this would have meant noticing the typo instantly while editing.
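    A minimal demonstration of the difference: the same typo held in a string only explodes when the pattern is finally used, while the qr// form refuses to compile at all (here wrapped in a string eval so the demo itself survives):

```perl
use strict;
use warnings;

my $as_string = '.*pl|*.txt';                      # typo: quantifier follows nothing
my $match_ok  = eval { 'x' =~ /$as_string/; 1 };   # dies here, at match time
my $qr_ok     = eval 'my $re = qr/.*pl|*.txt/; 1'; # dies when this code is compiled

print "string form failed at match time\n"  unless $match_ok;
print "qr form failed at compile time\n"    unless $qr_ok;
```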


    Cheers Rolf
    (addicted to the Perl Programming Language and ☆☆☆☆ :)
    Je suis Charlie!

    PS: I don't want to talk about the other flaws, like why extensions are checked with handcrafted regexes, or why exit codes from batches weren't checked...

New Cool Uses for Perl
Saving some seconds.
No replies — Read more | Post response
by BrowserUk
on Apr 26, 2016 at 15:14

    After posting my solution to 1161491, I had some 'free time', so I was playing.

    My REPL, which (can) time chunks of code for me automatically, produced some depressing numbers:

    C:\test>p1
    [0]{0} Perl> use Algorithm::Combinatorics qw[ permutations ];;
    [0]{0.00943684577941895} Perl> $iter = permutations( [ reverse 1 .. 9 ] );;
    [0]{0.000318050384521484} Perl> printf "\r%s\t", join '', @$_ while defined( $_ = $iter->next );;
    123456789
    [0]{22.5874218940735} Perl>

    22.5 seconds to generate 9! = 362880 permutations seemed longer than I would have expected; so then I wondered how much of that was down to the generation and how much to the formatting and printing:

    [0]{0} Perl>@d = permutations( [ reverse 1 .. 9 ] );;
    [0]{2.31235218048096} Perl>
    [0]{0} Perl> printf "\r%s\t", join '', @$_ for @d;;
    123456789
    [0]{18.9919490814209} Perl>

    So less than 2.5 seconds for the generation and almost 19 for the formatting and printing. (Leaving 1 second 'lost in the mix'.)

    Of course, that one line for printing is doing rather a lot. Unpacking the contents of the anon arrays to a list; joining the list of digits into a string; and then interpolating that into another string before writing it out. So then I wondered about the cost of each of those elements of the task.

    Looking at the code I saw that I could avoid 300,000 calls to each of join and printf by interpolating the lists from the array references directly into a string; provided I set $" appropriately:

    [0]{0} Perl> $"=''; $_ = "@$_" for @d;;
    [0]{1.93835282325745} Perl>

    That was a nice saving, so then I thought about writing the output. Rather than use a loop: print for @d; which means calling print 300,000 times -- with all the calls into the kernel that involves -- why not join those 300,000 strings into a single string (one call to join) and then output it with a single call to print:

    [0]{} Perl> $d = join "\r", @d;;
    [0]{0.0442740917205811} Perl> print $d;;
    123456789
    [0]{4.72821307182312} Perl>

    Summing the individual parts came out to ~10 seconds rather than the original 22.5. So let's put it all together and verify it:

    [0]{0} Perl> $"=''; @d = permutations( [ reverse 1 .. 9 ] ); $_ = "@$_" for @d; $d = join "\r", @d; print $d;;
    123456789
    [0]{9.26112604141235} Perl>

    Sure enough. Under 10 seconds; over 50% saved. Nice.

    Can we go further?:

    [0]{0} Perl> $"=''; print join "\r", map "@$_", permutations( [ reverse 1 .. 9 ] );;
    123456789
    [0]{10.0599029064178} Perl>
    [0]{0} Perl> $"=''; print join "\r", map "@$_", permutations( [ reverse 1 .. 9 ] );;
    123456789
    [0]{10.086268901825} Perl>

    And the answer is no. Sometimes the elimination of intermediate variables -- especially intermediate arrays when the alternative is several long lists -- backfires.
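    The print-batching effect is easy to re-measure repeatably with the core Benchmark module; a small sketch on synthetic lines, written to the null device so the terminal doesn't dominate the timing (numbers will vary by machine):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use File::Spec;

# 50,000 synthetic lines stand in for the stringified permutations.
my @lines = map { "line $_\n" } 1 .. 50_000;
open my $out, '>', File::Spec->devnull() or die $!;

cmpthese( 10, {
    many_prints => sub { print {$out} $_ for @lines },          # one print per line
    one_print   => sub { print {$out} join( '', @lines ) },     # one big print
} );
```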

    Still. Another 5 minutes of 'idle time', waiting for another long simulation run, occupied with a fun exercise, the lessons of which just might stay in my consciousness long enough to become influential in the code I write in future.

    A few otherwise idle minutes spent now, saving a few seconds on something that doesn't really benefit from that saving, just might save me hours or days if the lessons happen to be applicable to my next project; or the one after that.

    (If only I could apply that same level of savings to the simulation software I'm using -- open source, but I cannot compile it locally -- as perhaps then I would be looking at a 20 hour wait for its completion rather than the 40+ I have in prospect :().
