PerlMonks

The Monastery Gates

New Questions
Suppress 'Can't chdir to' warnings for File::Find
2 direct replies — Read more / Contribute
by mabossert
on Apr 28, 2016 at 12:03

    I am writing some prototype code for a larger application that needs to search all directories from a particular starting point and return all directories that contain certain types of files. The function works fine except that it produces warnings about not being able to chdir into directories that the user running the application does not have permission to read.

    Perhaps I am just being pickier than I should be, but the application needs to traverse directories that belong to the user as well as ones that do not, provided the user has permission to read them. When I run the test code, everything works as expected, but it produces warnings. They are expected, but I would like to suppress them or "tell" File::Find to skip any directories the user does not have permission to access. Alternatively, is there a way to have File::Find first test whether it can read a directory and simply move on if it cannot?

    Here is my current code. Any suggestions would be greatly appreciated.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use 5.016;
    use Carp qw(cluck carp croak);
    use JSON qw( encode_json );
    use File::Find;
    use Data::Dumper;

    my %seen;
    my @files;

    find( { wanted => \&wanted }, '/mnt/lustre' );
    say Dumper( \@files );

    sub wanted {
        if ( $File::Find::name =~ /\.nt$|^dbQuads$|^graph.info$|^string_table_chars.index$|^string_table_chars$/
            && !exists $seen{$File::Find::dir} )
        {
            my $user = ( getpwuid( ( stat $File::Find::dir )[4] ) )[0];
            my $name = $1 if $File::Find::dir =~ /\/([^\/]+)$/;

            #=+ Would like to know how big the directory contents are
            my $raw_size = 0;
            my $db_size  = 0;
            my $built    = 0;
            find( sub {
                if ( -f $_ && $_ =~ /\.nt$/ ) {
                    $raw_size += -s $_;
                }
                elsif ( -f $_ && $_ =~ /^dbQuads$|^string_table_chars.index$|^string_table_chars$/ ) {
                    $db_size += -s $_;
                    $built = 1;
                }
            }, $File::Find::dir );

            my %temp = (
                owner    => $user,
                raw_size => $raw_size,
                db_size  => $db_size,
                name     => $name,
                path     => $File::Find::dir,
                built    => $built,
            );
            push @files, \%temp;
            $seen{$File::Find::dir} = 1;
        }
    }
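    One way to avoid the warnings is to stop find() from descending into directories it cannot enter in the first place. Below is a minimal sketch (not the poster's code) using File::Find's preprocess hook, which receives each directory's entries with the current directory already set to that directory, and whose return value is the list find() goes on to visit:

    use strict;
    use warnings;
    use File::Find;

    find(
        {
            wanted     => \&wanted,    # the wanted() sub from the code above
            # Keep plain files; keep subdirectories only if they are readable
            # and executable, so find() never tries to chdir into a directory
            # it cannot enter (and never warns about it).
            preprocess => sub { grep { !-d $_ || (-r $_ && -x $_) } @_ },
        },
        '/mnt/lustre'
    );

    sub wanted { }    # placeholder for the real wanted() shown above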
Error Message - PL_perl_destruct_level at /usr/lib64/perl5/DynaLoader.pm
3 direct replies — Read more / Contribute
by NorCal12
on Apr 28, 2016 at 00:45

    I am new to this forum and certainly not a Perl expert.

    I have a website that is an auction for the commercial fishing industry. It has been up and running since 2011. It is currently located on a shared hosting site and I am in the process of moving it to a VPS on another company's server. I have moved all the files and the database to the new location and I have been testing everything before I have the DNS pointed to the new location. For the most part everything looks good. I have an archive section, where a user can look at tables of past sales. These are generated from data stored in a MySQL database. The display of past sales works fine.

    However, when I test a new sale I get an error when that sale is being inserted into the database. For speed in testing I have been using a "buy-it-now" feature rather than have to wait for an auction to end.

    The code begins as below:

    #!/usr/bin/perlml
    BEGIN {
        my $base_module_dir = ( -d '/home/jeffer36/perl' ? '/home/jeffer36/perl' : ( getpwuid($>) )[7] . '/perl/' );
        unshift @INC, map { $base_module_dir . $_ } @INC;
        unshift @INC, '/home/jeffer36/perl5/lib/perl5',
                      '/home/jeffer36/perl5/lib/perl5/x86_64-linux',
                      '/home/jeffer36/perl5/lib/perl5/x86_64-linux/Bundle';
    }
    use POSIX qw(strftime);
    use File::Copy;
    use strict;
    use CGI;
    use CGI::Session;
    use CGI::Carp qw(fatalsToBrowser);
    use File::CounterFile;
    use Data::Dumper;
    use DBI;
    #use DBD::mysql;

    When the sale is being inserted into the database this is the error message:

    install_driver(mysql) failed: Can't load '/home/jeffer36/perl5/lib/perl5/x86_64-linux/auto/DBD/mysql/mysql.so' for module DBD::mysql: /home/jeffer36/perl5/lib/perl5/x86_64-linux/auto/DBD/mysql/mysql.so: undefined symbol: PL_perl_destruct_level at /usr/lib64/perl5/DynaLoader.pm line 200, <BUYERFILE> line 88.
     at (eval 16) line 3
    Compilation failed in require at (eval 16) line 3, <BUYERFILE> line 88.
    Perhaps a required shared library or dll isn't installed where expected
     at ../auction/buyit.pl line 284

    Line 284 in buyit.pl is:

    my $dbh = DBI->connect("DBI:mysql:$db:$server", $userid, $passwd);

    I have searched the web for information on this error message and have come up empty. Does anyone have suggestions on how to correct the problem?
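    A hedged first check: an undefined PL_perl_destruct_level symbol when loading an XS .so usually means the module (here the DBD::mysql under /home/jeffer36/perl5) was compiled against a different perl build than the one now running the script. A throwaway diagnostic like the sketch below (the script name is made up), run under the same perl and @INC setup as buyit.pl, shows which interpreter and architecture are actually in use and which mysql.so would be picked up:

    #!/usr/bin/perl
    # which_perl.pl -- hypothetical throwaway diagnostic, not part of the site.
    use strict;
    use warnings;
    use Config;

    print "running perl : $^X\n";
    print "version/arch : $Config{version} $Config{archname}\n";
    print "useithreads  : ", ($Config{useithreads} ? "yes" : "no"), "\n";

    # Show every mysql.so this perl could pick up via @INC.
    for my $dir (@INC) {
        my $so = "$dir/auto/DBD/mysql/mysql.so";
        print "candidate .so: $so\n" if -e $so;
    }

    If the version, archname, or threading configuration differs from what DBD::mysql was built with on the old host, reinstalling DBD::mysql with the new server's perl should clear the undefined-symbol error.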

The process cannot access the file because it is being used by another process? [Problem disappeared!]
6 direct replies — Read more / Contribute
by BrowserUk
on Apr 27, 2016 at 17:44

    Update!

    Before I posted, I tried a bunch of things and ran it at least a dozen times, getting the exact same failure. I then cut the script off the top of the data, renamed the file .dat, and stuck the script into a new file with the old name. I changed <DATA> to <> and supplied the data file on the command line, and it worked first time.

    I've just recreated the all-in-one script; and now it runs perfectly. I have no explanation for the error or the cure; but I suspect 1nickt called it.

    Thanks for your help guys.


    I'm getting this error when I try to run the following code:

    C:\Motor>parseAns.pl
    The process cannot access the file because it is being used by another process.

    Another process is accessing the DATA pseudo-handle?

    The data is large (2.4 million lines), but I'm sure I've processed much larger datasets from <DATA> before; and I've commented out the storing of the data to the array, so it isn't running out of memory. Cluebats anyone?

    #! perl -slw
    use strict;

    my @data;
    my $i = 0;
    #$data[ $i++ ] = [ split ' ' ]
    ++$i while <DATA>;
    print $i;
    __DATA__
    -69.282032302755084 40.000000000000014 0 -1
    -69.123493781255831 39.908467741935496 -1.4443382565142906e-006 -1
    -68.748013135538145 40.911009397420145 0 -1
    -68.964955259756593 39.816935483870985 -2.990721348858345e-006 -1
    -68.370049396495517 40.298149116372635 -6.3944096502804299e-006 -1
    -68.202015544462682 41.814890597403163 0 -1
    ... 2.4 million lines omitted.

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
    In the absence of evidence, opinion is indistinguishable from prejudice.
Read Directory and getlines in .csv
3 direct replies — Read more / Contribute
by Anonymous Monk
on Apr 27, 2016 at 10:57

    Hello. I am new to Perl and programming. I have written a Perl program to get the names of the .csv files in a different directory that were uploaded to the server today only, judging by modified date. These files are 13+ MB, so I don't want to copy or move them to my own directory. I also wrote another Perl program which reads a file, keeps only the lines I want, and places those lines in a .csv with the criteria I need for my final report. My question is: how do I integrate these two programs into one without moving these large files out of the directory they are housed in? Is this possible?

    #This is the program to get the .csv file names from the directory:
    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::stat;
    use Text::CSV_XS;
    use IO::File;

    my $dirname = "Daily_QoS_CPD_link";
    opendir(DIR, $dirname) or die "Not able to open $dirname $!";

    # Create an array, Open the directory, get only .csv's modified in the last 24 hours
    my @dir = sort { -M "$dirname/$a" <=> -M "$dirname/$b" }
              grep /\.csv/,
              $dirname ne '.' && $dirname ne '..',
              readdir DIR;
    rewinddir(DIR);

    # create a list of csv's for today into a text file.
    my $One_day = 86400;
    foreach my $list (@dir) {
        my $diff = time() - stat("$dirname/$list")->mtime;
        if ($One_day > $diff) {
            open FILE, ">>CPD_Files.txt" or die "Not able to open file $!";
            print FILE "$list\n";
        }
    }
    closedir DIR;
    close FILE;

    # This is the code for getting the lines from the .csv's that I need
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    my $Finput  = "cpd_link_ABC_cpddrops_50_300300000.csv";
    my $Foutput = "data0426-2.csv";

    open my $FH,  "<", $Finput;
    open my $out, ">", $Foutput;

    my $csv = Text::CSV_XS->new({ binary => 1, eol => $/ });

    while (my $row = $csv->getline($FH)) {
        my @fields = @$row;
        if (   $fields[2] eq "DROPPED-10"
            || $fields[2] eq "CALL_START"
            || $fields[2] eq "CALL_END") {
            $csv->print($out, $row);
        }
    }
    close $FH;
    if (not $csv->eof) {
        $csv->error_diag();
    }
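    A hedged sketch of one way to combine the two programs (the directory name, output file, field tests, and 24-hour window are taken from the code above; how the pieces are glued together is my assumption). The large files stay where they are and are only read in place:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::stat;
    use Text::CSV_XS;

    my $dirname = "Daily_QoS_CPD_link";
    my $Foutput = "data0426-2.csv";
    my $One_day = 86400;

    # Only .csv files modified within the last 24 hours; they are read in
    # place, never copied or moved.
    opendir my $dh, $dirname or die "Not able to open $dirname: $!";
    my @todays_csvs = grep { /\.csv$/ && time() - stat("$dirname/$_")->mtime < $One_day }
                      readdir $dh;
    closedir $dh;

    my $csv = Text::CSV_XS->new({ binary => 1, eol => $/ });
    open my $out, ">", $Foutput or die "Cannot write $Foutput: $!";

    for my $file (@todays_csvs) {
        open my $FH, "<", "$dirname/$file" or die "Cannot read $dirname/$file: $!";
        while (my $row = $csv->getline($FH)) {
            if (   $row->[2] eq "DROPPED-10"
                || $row->[2] eq "CALL_START"
                || $row->[2] eq "CALL_END") {
                $csv->print($out, $row);
            }
        }
        $csv->eof or $csv->error_diag();
        close $FH;
    }
    close $out;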
Comparing Lines within a Word List
9 direct replies — Read more / Contribute
by dominick_t
on Apr 26, 2016 at 15:54
    Hello all-- New to Perl, new to this forum. Many thanks in advance for reading and offering help. I have a background in mathematics and have done a bit of programming, mostly for specific tasks that led me to learn just enough of a language to achieve them. So I wouldn't call myself thoroughly conversant in any language. I am, however, interested in learning Perl more deeply, as I have some long-term projects that will require managing and searching through word lists in creative ways. I've been reading the O'Reilly book Learning Perl, but I have a specific problem that I need to solve somewhat urgently, and I'm afraid I haven't learned enough Perl yet to even attempt some code that could do it.

    So, here's the problem: I have a long list of text strings saved in a .txt file. What I am interested in are pairs of words that are exactly the same, except in one position . . . in particular, where one word has, say, an R, the other word has, say, an S. So if the word list was a standard dictionary and I ran the code on it, the output would include the pairs RAT and SAT, also RATE and SATE, also BARE and BASE, also BARR and BARS.

    This strikes me as something that should be possible using regular expressions in a Perl script. Am I right about that? If so, and if it's pretty easy for an expert to write some code that will do this, I would be much obliged, not just because I need a speedy solution to this question due to a deadline, but also because it will give me a great piece of example code to help me in getting my hands dirty learning Perl. All best-- Dominick
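    A minimal sketch of one approach (assumptions: the list is one word per line in a file called words.txt, the words are uppercase, and only R-to-S substitutions matter). Rather than compare every pair, it builds a hash of all the words and, for each word, swaps each R for an S and checks whether the result is also in the list:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $wordlist = 'words.txt';    # hypothetical input file, one word per line

    open my $fh, '<', $wordlist or die "Cannot open $wordlist: $!";
    chomp(my @words = <$fh>);
    close $fh;

    my %is_word = map { $_ => 1 } @words;

    for my $word (@words) {
        # Try changing each R, one position at a time, into an S.
        for my $pos (0 .. length($word) - 1) {
            next unless substr($word, $pos, 1) eq 'R';
            my $candidate = $word;
            substr($candidate, $pos, 1) = 'S';
            print "$word $candidate\n" if $is_word{$candidate};
        }
    }

    Run against a standard dictionary this prints pairs such as RAT SAT and BARE BASE; a regular expression is not strictly needed here, since the hash lookup does the pairing.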
Fast provider feeding slow consumer
3 direct replies — Read more / Contribute
by leostereo
on Apr 24, 2016 at 12:11

    Hi friends, a week ago I had to deal with this situation where a very fast provider is feeding a slow consumer.
    I posted my solution using forks, which was working fine on a test server, but when I put it to run on a production server it crashed.
    The test server is a very old box running CentOS 6 and the production server is a virtual machine on a VMware platform running Oracle RH version 7.
    Instead of figuring out why it was running fine on one machine and crashing on the other, I decided to go for a parallel solution.
    Some users suggested I read about parallel preforking, so I came up with these piped scripts:
    ./lines_dispacher.pl | ./lines_consumer_parallel2.pl

    I want to say that I'm running both scripts connected by a pipe for two reasons:
    First: I cannot merge them ... I don't know how to do it, so some help on this would be great.
    Second: I realized that this way I can use the tee command and analyze both outputs.
    I would like to share both scripts so you can help me improve them, or maybe suggest other ways to do this task. Thanks


    ##################################lines_dispacher.pl:
    #!/usr/bin/perl
    use IO::Socket::INET::Daemon;
    use Proc::Daemon;
    use Proc::PID::File;
    use IO::Handle;

    STDOUT->autoflush(1);

    my $host = new IO::Socket::INET::Daemon(
        host     => '172.24.3.208',
        port     => 7777,
        timeout  => 20,
        callback => {
            data => \&data,
        },
    );
    $host->run;

    sub data {
        my ($io, $host) = @_;
        my $line = $io->getline;
        chomp($line);
        return 0 unless $line;
        print "$line\n";
        return !0;
    }

    ###########################################lines_consumer_parallel2.pl:
    #!/usr/bin/perl
    use DBI;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(10);
    $forks = 1;

    while (<>) {
        $pm->start() and next;    # Parent nexts
        ###
        my ($type, $ip, $mac, $bsid, $datecode) = split(',', $_);
        $cpe = $ip;
        $mac =~ s/-//g;
        $community = 'public';
        $snmp_rssi = '.1.3.6.1.4.1.9885.9885.1.2.0';
        $output = qx(snmpwalk -v2c -t1 -c $community $cpe $snmp_rssi 2>&1);    # this is the task that delays the consumer process.
        if ($output eq "Timeout: No Response from $ip") {
            $rssi  = 0;
            $error = 'SNMP not responding. Upgrade firmware';
        }
        else {
            @result = split(/:/, $output);
            $rssi = $result[3];
            $rssi =~ s/ //g;
            $rssi =~ s/\n//g;
            if ($rssi < -100) {
                $rssi = $rssi / 100;
            }
            $rssi = int($rssi);
        }
        $dbh = DBI->connect("DBI:mysql:database=cpe_info;host=172.24.3.207;port=3306", "account_process", "neting.!");
        $query = "INSERT INTO cpe_info(mac,ip,bsid,rssi) VALUES" .
                 "('$mac','$ip','$bsid','$rssi')" .
                 "ON DUPLICATE KEY UPDATE ip='$ip',bsid='$bsid',rssi='$rssi'";
        $sth = $dbh->prepare($query);
        $sth->execute();
        $dbh->disconnect();
        print "we are on fork number $forks\n";
        $forks++;
        ###
        $pm->finish();
    }

    Last comment: I was also trying to print the fork number at the end of the consumer script. I did not get the expected output, since every line prints "1", but then I realized that this is actually to be expected, because the counter is incremented in a different process each time. So another goal for me is to learn how I can get the fork number. Regards.
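    A minimal sketch of one way to get a usable fork number (assuming the same Parallel::ForkManager loop as above): increment a counter in the parent before calling start(), so each child inherits its own copy of the value at fork time:

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(10);
    my $fork_number = 0;

    while (my $line = <>) {
        # Increment in the parent, *before* forking, so the child copies
        # the value it should report.
        my $this_fork = ++$fork_number;

        $pm->start and next;    # parent goes on to the next line
        # ... the slow per-line SNMP/DB work goes here ...
        print "we are on fork number $this_fork\n";
        $pm->finish;
    }
    $pm->wait_all_children;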

PerlXS typemap and reference counting
1 direct reply — Read more / Contribute
by joyrex2001
on Apr 23, 2016 at 18:27
    Hello Monks,

    I am working on a perl5 adapter for the gRPC API, using perlxs (https://github.com/joyrex2001/grpc-perl). In this implementation I am passing XS objects as input to other XS objects. The problem I have is that when I use these objects, the reference count is not increased, and the objects go out of scope too early.

    As an example, consider the following setup:
    Grpc::XS::Call      T_PTROBJ
    Grpc::XS::Channel   T_PTROBJ
    Grpc::XS::Timeval   T_PTROBJ
    In the call constructor, I pass both a Channel and Timeval instance, see:
    Grpc::XS::Call
    new(const char *class, Grpc::XS::Channel channel, \
        const char* method, Grpc::XS::Timeval deadline, ... )
      PREINIT:
        CallCTX* ctx = (CallCTX *)malloc( sizeof(CallCTX) );
        ctx->wrapped = NULL;
      CODE:
        ## some code removed ##
        ctx->wrapped = grpc_channel_create_call(
                           channel->wrapped,
                           NULL,
                           GRPC_PROPAGATE_DEFAULTS,
                           completion_queue,
                           method,
                           host_override,
                           deadline->wrapped,
                           NULL);
        RETVAL = ctx;
      OUTPUT:
        RETVAL
    Once the constructor has been called, the timeval object is already out of scope and gets destroyed by perl. However, that is not what I want: it is only out of scope on the Perl side, not in my XS code, which still holds a pointer to it.

    I think the best way forward would be to increase the refcount of the object so that it is not destroyed yet. Any tips on how to do that? Or maybe there is another approach?
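    One direction, as a hedged sketch at the Perl level rather than in the XS code itself (the wrapper class name is hypothetical): keep the channel and deadline alive by holding ordinary Perl references to them for as long as the call object lives, so their reference counts never drop to zero while the underlying C structures are still in use. The XS-side equivalent would be an SvREFCNT_inc on the passed SVs in new() with a matching SvREFCNT_dec in DESTROY.

    package Grpc::Client::Call;    # hypothetical wrapper, not part of the XS API
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        # Argument order follows the XS constructor shown above.
        my $raw = Grpc::XS::Call->new($args{channel}, $args{method}, $args{deadline});
        # Storing the channel and deadline here keeps their refcounts up
        # for as long as this wrapper (and therefore the raw call) lives.
        return bless {
            raw      => $raw,
            channel  => $args{channel},
            deadline => $args{deadline},
        }, $class;
    }

    sub raw { $_[0]{raw} }

    1;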

    Gr., Vincent
Inline-Java Module Errors
2 direct replies — Read more / Contribute
by dorianwinterfeld
on Apr 22, 2016 at 10:00
    Most Wise Monks - I am maintaining some CGI code that uses the Inline::Java module. We are running ActiveState Perl on Windows and we are upgrading from Windows 2003 to Windows 2012. We installed ActivePerl 5.22 and all the modules that the code requires. I am testing and getting these error messages:

    Can't find running JVM and START_JVM = 0 at C:/Perl64/site/lib/Inline/Java.pm line 478.
    BEGIN failed--compilation aborted at /wwwroot/ELS_Applications/cgi-bin/fin_reg_el/rel2/FR_jdbc_perl_facade.pm line 21.

    Here is line 478 of Java.pm:
    $JVM = new Inline::Java::JVM($o);

    Java is running:
    C:\glassfish4\jdk7\lib>java -version
    java version "1.7.0_21"
    Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)

    Do you have any suggestions as to how I can debug this?
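    A minimal, hedged way to narrow it down (the class and file names are made up): run Inline::Java from a tiny standalone script, outside the CGI application, to see whether a JVM can be started at all under the new Windows/Perl install. If even this fails with "Can't find running JVM and START_JVM = 0", the problem is in how Inline::Java is configured (that message suggests a SHARED_JVM setup expecting an already-running JVM, as may have existed on the old server) rather than in FR_jdbc_perl_facade.pm itself.

    #!/usr/bin/perl
    # jvm_smoke_test.pl -- hypothetical standalone check, not part of the app.
    use strict;
    use warnings;

    # One trivial Java class, compiled and loaded by Inline::Java at startup.
    use Inline Java => 'class Hello { public Hello() {} public String greet() { return "JVM is up"; } }';

    my $h = Hello->new();
    print $h->greet(), "\n";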
    - Dorian Winterfeld
    dorian.winterfeld@gmail.com

PAR::Packer : Can't use the generated exe if there is no installation of perl
5 direct replies — Read more / Contribute
by Psylo
on Apr 22, 2016 at 09:50
    Dear Monks,

    I am trying to make standalone exes of my Perl scripts. I have chosen PAR::Packer to do that because it is free.
    To begin with, here is my environment:
    -This is perl 5, version 22, subversion 1 (v5.22.1) built for MSWin32-x64-multi-thread installed with Strawberry Perl
    -PAR::Packer 1.0.31
    -Windows 7

    So now, what's the problem?
    I have managed to create an executable which is working fine with the command line pp -o test.exe test.pl
    When I give it to my colleagues, they have the error 0xc00007b "the application was unable to start correctly".
    I have managed to reproduce the issue by deleting some of my PATH values: C:\Strawberry\c\bin and/or C:\Strawberry\perl\bin
    After reading some forums I have tried pp -c -o test.exe test.pl or pp -c -x test.exe test.pl but I always have the same issue...


    For me, PAR::Packer is supposed to bundle perl in the exe. But am I supposed to add something else to the delivery? I didn't find anything on the internet about this, which is why it sounds strange to me. Do I have to add some DLLs that the -c and -x options didn't find? How do I know which ones to pick?

    Just in case, my colleagues all have Windows 7 installed.
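    A hedged suggestion (the specific DLL names are assumptions; check what actually lives in C:\Strawberry\c\bin on the build machine): the 0xc000007b error on machines without Strawberry's directories in PATH usually means the packed exe cannot load a compatible copy of the gcc runtime DLLs, which pp does not bundle automatically. pp's -l/--link option can embed them:

    pp -o test.exe ^
       -l C:\Strawberry\c\bin\libgcc_s_seh-1.dll ^
       -l C:\Strawberry\c\bin\libstdc++-6.dll ^
       -l C:\Strawberry\c\bin\libwinpthread-1.dll ^
       test.pl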
PAR::Packer to create an application without a decompression into temp?
2 direct replies — Read more / Contribute
by bubnikv
on Apr 22, 2016 at 05:12
    Hello Perl Monks. We are developing an application in Perl for which we need to create an installer targeting Windows and Mac OS. I have been experimenting with PAR::Packer and with Cava. The Cava packager does roughly what I would expect, but as far as I know it only works with Citrus Perl on macOS, and Citrus Perl was abandoned a couple of years ago. PAR::Packer, on the other hand, works with up-to-date Perl distributions.

    Now there is an inconvenience with PAR::Packer: it decompresses into a temp directory on startup. Distributing an application built this way with an installer does not make much sense. With an executable produced by PAR::Packer, most of the code of a larger Perl application is kept on the hard drive twice: once inside the PAR archive, and a second time in the temp cache.

    Is there a way to reduce the redundancy? I would like to install the temp cache with my installer and remove the already decompressed files from the PAR archive. Thanks, Vojtech
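    One partial mitigation, offered as a hedged sketch rather than a full answer to the "install the cache" idea: building with pp's -C/--clean switch makes the packed executable delete its extraction cache when it exits, so the decompressed copy at least does not persist on disk alongside the installed archive (at the cost of re-extracting on every start). The environment variables that control where that cache lives are documented in PAR::Environment, which an installer could use to point the cache at a directory it manages.

    rem hedged example: build a self-cleaning executable
    pp -C -o myapp.exe myapp.pl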
New Meditations
Regular Expreso 2
2 direct replies — Read more / Contribute
by choroba
on Apr 26, 2016 at 16:24
    As you might have noticed, I like programming puzzles and brain teasers. But I hadn't participated in a real public contest... until today. I registered for Regular Expreso 2 on HackerRank. The participants had 24 hours to solve 8 tasks, and Perl was among the supported languages. The contest was regular expression-centered, and your code always had to end the same way:
    $Test_String = <STDIN>;

    if ($Test_String =~ /$Regex_Pattern/) {
        print "true";
    } else {
        print "false";
    }

    The top 10 contestants (most points + shortest time) won a T-shirt. Once I realised there were more than 10 people with the full score, I knew I wasn't getting one, but I still wanted to get the full score.

    But I had no idea how to solve one of the tasks: the input was a long string of zeroes and ones. Your regex had to recognise whether the string encoded two binary numbers in the following way: when you reversed the string and extracted the odd digits, you got a number three times greater than the one built from the remaining digits. For example,

    1110110001 => 00111 and 10101, i.e. 7 and 21; 3 * 7 = 21, accept!

    I wrote a short script to generate some examples, but I wasn't able to find the pattern. Moreover, the regex couldn't have more than 40 characters!

    Then I remembered Perl has a way to run code in a regex: the (?{...}) pattern. I read the relevant perlre section several times and tried something like the following:

    use bigint;

    sub check {
        my ($bin, $three) = ('0b') x 2;
        my $s = $_;
        while ($s) {
            $three .= chop $s;
            $bin   .= chop $s;
        }
        return oct $three == 3 * oct $bin
    }

    $Regex_Pattern = qr/(?{ check() })/;

    The problem here is that (?{...}) always matches. Fortunately, you can use the code pattern as the condition in

    (?(condition)yes-pattern|no-pattern)

    As the yes-pattern, I used ^ which always matches, and (*FAIL) for the no-pattern:

    $Regex_Pattern = qr/^(?(?{ check() }) ^ | (*FAIL) )/x;

    The qr adds some characters, so to be sure I didn't exceed the 40-character limit, I renamed the subroutine to c and golfed the solution down to

    '^(?(?{c()})^|(*F))'

    I got the full score! Would you call such a solution cheating? On one hand, I see that's not what the organisers wanted me to do, on the other hand, that's what Perl regular expressions give you. In fact, with the "start or fail" pattern, I can solve almost any problem with a regular expression!
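    Putting the pieces together as a self-contained demo (my reconstruction rather than the contest harness: here $_ has to hold the test string, because check() reads it from there):

    use strict;
    use warnings;
    use bigint;

    sub check {
        my ($bin, $three) = ('0b') x 2;
        my $s = $_;
        while ($s) {
            $three .= chop $s;    # digits from the odd positions of the reversed string
            $bin   .= chop $s;    # the remaining digits
        }
        return oct $three == 3 * oct $bin;
    }

    my $Regex_Pattern = qr/^(?(?{ check() }) ^ | (*FAIL) )/x;

    for my $Test_String ('1110110001', '1110110011') {
        local $_ = $Test_String;
        print "$Test_String: ",
            ($Test_String =~ /$Regex_Pattern/ ? "true" : "false"), "\n";
    }

    The first string is the accepted example above; the second prints false because its two numbers come out as 23 and 21, and 21 is not 3 * 23.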

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Good practice: A case for qr//
1 direct reply — Read more / Contribute
by LanX
on Apr 25, 2016 at 13:28
    Just wanted to share why using qr// to store regexes is usually a better idea than using strings...

    Today I was told to debug why a nightly batch had been failing to complete for the last few weeks ...

    As it turned out, filenames were checked against a list of hardcoded regexes and one of them had a typo. Instead of filter => ".*pl|.*txt" it had ".*pl|*.txt", which was hard to spot among many other regexes and caused a runtime error.

    Now using qr would have caused filter => qr/.*pl|*.txt/ to fail immediately at compile time.
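    A minimal demonstration of the difference (the hash and patterns are made up to mirror the example above):

    use strict;
    use warnings;

    # String version: the typo is just data, so perl -c says nothing;
    # the bad pattern only blows up when it is eventually used at runtime.
    my %job = ( filter => ".*pl|*.txt" );

    # qr version: the pattern is compiled right here, so perl -c (and any
    # editor running it in the background) stops immediately with
    # "Quantifier follows nothing in regex".
    my %job_qr = ( filter => qr/.*pl|*.txt/ );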

    And since my colleagues use Komodo, which runs perl -c in the background (like flymake-mode in Emacs), this would have meant noticing the typo instantly while editing.

    HTH!

    Cheers Rolf
    (addicted to the Perl Programming Language and ☆☆☆☆ :)
    Je suis Charlie!

    PS: I don't want to talk about the other flaws here, like why extensions are checked with handcrafted regexes or why exit codes from the batch jobs weren't checked...

Bullish on Moose, how about you?
7 direct replies — Read more / Contribute
by nysus
on Apr 22, 2016 at 16:44

    I've been a programming dabbler on and off for the past sixteen years or so (not counting the year I learned Apple Basic as a kid). I've recently gotten heavily back into programming for a couple of largish personal projects; most recently an event aggregator for the community where I live. For the last project, I decided to take a stab at doing OO Perl, because too often I found that my procedural code for larger projects degraded into spaghetti code as I dropped in more and more code to address all the edge cases that inevitably cropped up.

    Fortunately, I had worked some with "old school" OO Perl in the past, just to get familiar with it. I had also played a little with other OO-oriented languages, so I was pretty familiar with most of the concepts. Still, I had to dig out my worn copy of Damian Conway's "Object Oriented Perl" book, written back in 2000, to refresh myself. But after writing a fairly modest program with old school OO Perl, it became apparent that it's probably not practical for a less experienced programmer like me to use it for bigger, more complex projects like the event aggregator I was trying to build. I had to waste too much effort worrying about whether I was using Perl properly to implement OO design and grokking all kinds of advanced programming techniques to make it all work.

    Then some Monks here recommended that I try Moose. I had never heard of Moose but, man, I am extremely glad I took their advice. Over the past month or so that I have been working with Moose, I found it to take a lot of the tedium out of programming and it has made it a much more joyful and less frustrating experience for me. When I see my code taking on the form of stringified pasta, I can easily break the code down into discrete chunks that makes it much easier for me to focus on the big picture of the program's structure and less on the detailed ins and outs of what the code is doing. I have a lot left to learn with Moose but there seems to be little question at this point in time that Moose will make me a better, more productive programmer and will help me create much more maintainable code. I can't see myself using anything but Moose for all but the simplest of scripts.

    I'm wondering if other Monks who have tried Moose have similar feelings. Maybe I was just a horrible procedural programmer. Have you tried Moose and then abandoned it? Right now I don't see any downsides to using it (my scripts don't have to run fast), but I'd be interested to know if you have a different take on Moose. Are there any limitations inherent to Moose that I should be aware of?

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks

New Cool Uses for Perl
Saving some seconds.
No replies — Read more | Post response
by BrowserUk
on Apr 26, 2016 at 15:14

    After posting my solution to 1161491 I had some 'free time' so I was playing.

    My REPL which (can) time chunks of code for me automatically, produced some depressing numbers:

    C:\test>p1
    [0]{0} Perl> use Algorithm::Combinatorics qw[ permutations ];;
    [0]{0.00943684577941895} Perl> $iter = permutations( [ reverse 1 .. 9 ] );;
    [0]{0.000318050384521484} Perl> printf "\r%s\t", join '', @$_ while defined( $_ = $iter->next );;
    123456789
    [0]{22.5874218940735} Perl>

    22.5 seconds to generate 9! = 362880 permutations seemed longer than I would have expected; so then I wondered how much of that was down to the generation and how much the formatting and printing:

    [0]{0} Perl> @d = permutations( [ reverse 1 .. 9 ] );;
    [0]{2.31235218048096} Perl>
    [0]{0} Perl> printf "\r%s\t", join '', @$_ for @d;;
    123456789
    [0]{18.9919490814209} Perl>

    So less than 2.5 seconds for the generation and almost 19 for the formatting and printing. (Leaving 1 second 'lost in the mix'.)

    Of course, that one line for printing is doing rather a lot. Unpacking the contents of the anon arrays to a list; joining the list of digits into a string; and then interpolating that into another string before writing it out. So then I wondered about the cost of each of those elements of the task.

    Looking at the code I saw that I could avoid 300,000 calls to each of join and printf by interpolating the lists from the array references directly into a string; provided I set $" appropriately:

    [0]{0} Perl> $"=''; $_ = "@$_" for @d;;
    [0]{1.93835282325745} Perl>

    That was a nice saving, so then I thought about writing the output. Rather than use a loop -- print for @d; -- which means calling print 300,000 times, with all the calls into the kernel that involves, why not join those 300,000 strings into a single string (one call to join) and then output it with a single call to print:

    [0]{} Perl> $d = join "\r", @d;;
    [0]{0.0442740917205811} Perl> print $d;;
    123456789
    [0]{4.72821307182312} Perl>

    Summing the individual parts came out to ~10 seconds rather than the original 22.5. So let's put it all together and verify it:

    [0]{0} Perl> $"=''; @d = permutations( [ reverse 1 .. 9 ] ); $_ = "@$_" for @d; $d = join "\r", @d; print $d;;
    123456789
    [0]{9.26112604141235} Perl>

    Sure enough. Under 10 seconds; over 50% saved. Nice.

    Can we go further?:

    [0]{0} Perl> $"=''; print join "\r", map "@$_", permutations( [ reverse 1 .. 9 ] );;
    123456789
    [0]{10.0599029064178} Perl>
    [0]{0} Perl> $"=''; print join "\r", map "@$_", permutations( [ reverse 1 .. 9 ] );;
    123456789
    [0]{10.086268901825} Perl>

    And the answer is no. Sometimes the elimination of intermediate variables -- especially intermediate arrays when the alternative is several long lists -- backfires.
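    For anyone who wants to reproduce the comparison outside a REPL, here is a minimal Benchmark sketch (writing to a scratch file instead of the console is my assumption; it changes the absolute numbers but not the shape of the comparison):

    use strict;
    use warnings;
    use Benchmark qw[ cmpthese ];
    use Algorithm::Combinatorics qw[ permutations ];

    my @d = permutations( [ reverse 1 .. 9 ] );
    open my $out, '>', 'perms.tmp' or die $!;

    cmpthese( 5, {
        per_line  => sub {    # printf + join once per permutation
            printf {$out} "\r%s\t", join '', @$_ for @d;
        },
        one_print => sub {    # interpolate via $", join once, print once
            local $" = '';
            print {$out} join "\r", map "@$_", @d;
        },
    } );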

    Still. Another 5 minutes of 'idle time', waiting for another long simulation run, occupied with a fun exercise, the lessons of which just might stay in my consciousness long enough to become influential in the code I write in future.

    A few otherwise idle minutes spent now, saving a few seconds on something that doesn't really benefit from that saving; that just might save me hours or days if the lessons happen to be applicable to my next project; or the one after that.

    (If only I could apply that same level of savings to the simulation software I'm using -- open source, but I cannot compile it locally -- as perhaps then I would be looking at a 20 hour wait for its completion rather than the 40+ I have in prospect :().


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
    In the absence of evidence, opinion is indistinguishable from prejudice.