The Monastery Gates

New Questions
Reusing Compiled Perl or Opcode for Apache Server
2 direct replies — Read more / Contribute
by mlodato
on Aug 15, 2017 at 20:36

    O wise ones, I have come to you in humility to ask a question for which I have little surrounding knowledge. Please take pity on me and ask for clarification so I can seek information to bring back to you.

    I work on a project that has a Perl back end behind an Apache server. The development servers are very slow. For certain production-like servers, we precache our static content and preload our Perl modules. Doing this takes a long time. It would be nice if it didn't take so long a second time if the Perl code hasn't changed.

    I was wondering: is there a way to preload all of the Perl modules once and then serialize the result in some way, so it can be read back in more quickly a second time?

    Note that I don't yet know exactly what "preloading Perl modules" means, but I am actively looking into it and maybe you don't need that information to answer the question because preloading is a common term.

    I have seen several posts saying that compiling Perl into C-like code is not yet a thing. That's fine; I'm not looking to optimize or hide the code. I have seen several posts saying that I can generate an executable with PAR::Packer. Maybe this can be used? I have seen several posts saying that Perl is first parsed into opcodes before being run. I'm not sure if this applies only to Perl 6, but if it's true for Perl 5, I see no reason for those opcodes not to be reusable... right? I have also seen a post explaining that Perl can't be statically parsed, which I find confusing.

    Edit: After some digging, it looks to my untrained eyes like "preloading" just calls use, load_class from Class::Load, and ->new for each module.
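    For context, a core-only sketch of what that load step amounts to; Class::Load's load_class() does essentially this with more error checking. The module list here is hypothetical (core modules as stand-ins for application modules):

```perl
use strict;
use warnings;

# Hypothetical list of modules to warm up before serving requests.
my @modules = ('File::Spec', 'Cwd');

for my $module (@modules) {
    # Turn Foo::Bar into Foo/Bar.pm and require it; the compiled
    # module is cached in %INC, so a second require is a no-op.
    (my $path = "$module.pm") =~ s{::}{/}g;
    require $path;
}

print "loaded: ", join(', ', grep { m{^(File/Spec|Cwd)\.pm$} } keys %INC), "\n";
```

    The caching only helps within one process, which is exactly why the serialize-to-disk question is interesting.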

sed command with variable in perl
3 direct replies — Read more / Contribute
by samira_saber
on Aug 14, 2017 at 15:38
    I have a problem with the sed command in Perl. This is the command that I have:

        system( q{sed -i -- "s/SOL..............*/SOL $sol1/g"} );

    I know that I should use double quotes and backslashes for the variable to be interpolated, but then the command can't be run, and when it does run, the variable isn't substituted. How should I write it?
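    One way around the quoting problem (a sketch; the input file is hypothetical, since the original command does not show one) is to build the sed expression with Perl's own interpolation and pass system() a list, so the shell never re-parses any quotes:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my $sol1 = "42.0";

# Hypothetical input file containing a "SOL ..." line.
my ($out, $file) = tempfile();
print $out "SOL 12345678901234 extra\n";
close $out;

# Interpolate $sol1 in Perl, then pass the arguments as a list:
# no shell is involved, so no escaping gymnastics are needed.
my $expr = "s/SOL..............*/SOL $sol1/g";
system('sed', '-i', $expr, $file) == 0 or die "sed failed: $?";

open my $in, '<', $file or die "reopen: $!";
my $result = do { local $/; <$in> };
print $result;
```

    In list form, q{} versus "" only matters on the Perl side; the double-quoted $expr interpolates, and sed receives it verbatim.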
can't open file for appending
2 direct replies — Read more / Contribute
by ytjPerl
on Aug 14, 2017 at 11:57
    Hi folks, I have the code below to start up my server and log its output to a file. I've tested it and it was working, but today when I tried to run it again, I got the error: Can't open 'D:/log_script/tuxedo_logs/startup.20170814xxxxxx.log' for appending: No such file or directory. I am really confused; I assumed the line my $input = "D:/log_script/tuxedo_logs/startup" . DATETIME . ".log"; would create this file for writing.
    use strict;
    use warnings;
    use lib "D:/App/Perl/lib//";
    use autodie;
    use Capture::Tiny 'capture_merged';
    use POSIX;
    use constant DATETIME => strftime("%Y%m%d%H%M%S", localtime);

    my $input = "D:/log_script/tuxedo_logs/startup" . DATETIME . ".log";
    open( my $file, ">>", $input ) or die "cannot open $!";
    chdir "D:/server/setup";
    print $file capture_merged { system('setenv.cmd&&tmboot -y') };
    close($file);
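    Worth noting: open '>>' creates a missing *file*, but never a missing *directory*, so a likely culprit is that D:/log_script/tuxedo_logs/ itself disappeared. A sketch of a defensive version (a relative path stands in for the D:/ path here):

```perl
use strict;
use warnings;
use File::Basename qw(dirname);
use File::Path qw(make_path);

# Stand-in for "D:/log_script/tuxedo_logs/startup" . DATETIME . ".log"
my $input = "logs/tuxedo_logs/startup.20170814.log";

# Ensure the parent directory exists before opening;
# open() creates the file, but not the directories above it.
my $dir = dirname($input);
make_path($dir) unless -d $dir;

open( my $file, ">>", $input ) or die "cannot open $input: $!";
print $file "server started\n";
close($file);
```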
Can't locate object method with CAM::PDF. I'm doing something dumb
1 direct reply — Read more / Contribute
by jorba
on Aug 14, 2017 at 08:18
    I cribbed some code from the internet to extract data from a pdf using CAM:PDF. It works fine. I then adapted it as part of a larger program. That doesn't work. So I'm doing something dumb. Appreciate it if someone could point out what it is.

    Running on Windows.

    Here's the cribbed code. It works fine, printing the data to the console.

    use strict;
    use warnings;
    use CAM::PDF;
    use LWP::UserAgent;

    my $pdf_filename = 'C:\Users\Jay\Desktop\SBS DEV\test.pdf';
    convert_pdf_to_text();

    sub convert_pdf_to_text {
        use CAM::PDF::PageText;
        my $pdf_filename = 'C:\Users\Jay\Desktop\SBS DEV\test.pdf';
        my $pdf = CAM::PDF->new($pdf_filename);
        my $y   = $pdf->getPageContentTree(1);
        print CAM::PDF::PageText->render($y);
    }

    Here's my code in full. The relevant bit is the line print CAM::PDF::PageText->render($content);

    use strict;
    use warnings;
    use DBI;
    use CAM::PDF;

    my $db;
    my $sql;
    my $src;
    my $tgt;
    my $file;
    my $cnt;
    my @row;

    sub ConvertPDFToText {
        my $infn;
        my $fh;
        my $pdf;
        my $content;
        $infn = "$_[0]\\$_[2]";
        open($fh, '>', "$_[1]" . "\\" . "Archive.txt");
        print "filename $infn\n";
        print "xx\n";
        $pdf = CAM::PDF->new($infn);
        $content = $pdf->getPageContentTree(1);
        print CAM::PDF::PageText->render($content);
        return "";
    }

    # Get db handle
    $db = DBI->connect('DBI:mysql:SBS_Dev', 'DBProcess', 'ThhuSd73MIWAWaY6')
        or die 'Cant Connect to DB';

    # Get file directories
    $sql = $db->prepare('SELECT SRC_DIR, TGT_DIR FROM EXP_EXTRACT_CNTL WHERE ID = 1')
        or die 'Couldnt run cntl sql: ' . $db->errstr;
    $sql->execute();
    @row = $sql->fetchrow_array();
    ($src, $tgt) = @row;
    print "source $src\n";
    print "target $tgt\n";
    if ($sql->rows == 0) {
        die 'Control info not found';
    }

    # Process files from source directory
    opendir(DIR, $src) or die "Cant open Dir: $!";
    while (($file = readdir(DIR))) {
        if ($file ne '.' and $file ne '..' and $file ne "Archive") {
            print "file $file\n";
            # Get data out of file
            ConvertPDFToText($src, $tgt, $file);
            $cnt = $cnt + 1;
            # Move file to archive
            rename "$src\\$file" => "$tgt\\$file";
        }
    }
    closedir(DIR);
    print '$cnt files processed\n';

    Here's the output from running the second one

    C:\Users\Jay\Desktop\SBS DEV\CODE\perl>.\
    source C:\Users\Jay\Desktop\SBS DEV\Data\Receipts
    target C:\Users\Jay\Desktop\SBS DEV\Data\Extracted
    file home depot large 2.pdf
    filename C:\Users\Jay\Desktop\SBS DEV\Data\Receipts\home depot large 2.pdf
    xx
    Can't locate object method "render" via package "CAM::PDF::PageText" at C:\Users\Jay\Desktop\SBS DEV\CODE\perl\ line 31.

    C:\Users\Jay\Desktop\SBS DEV\CODE\perl>

    As far as I can see, the relevant lines of code are identical as are the lines needed to get there. So what am I missing?

    Thanx J.
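    For what it's worth, that error message usually means the package in question was never compiled at all, and one visible difference between the two scripts is that the working one has use CAM::PDF::PageText; inside the sub, while the second only does use CAM::PDF; (which does not pull PageText in). The effect is easy to reproduce with any unloaded package; the package name below is hypothetical:

```perl
use strict;
use warnings;

# Calling a class method on a package that was never use'd or
# require'd produces exactly this style of error at runtime.
my $err = '';
eval { Some::Missing::Package->render('x') };
$err = $@;
print $err;   # Can't locate object method "render" via package ...
```

    If that diagnosis is right, adding use CAM::PDF::PageText; to the second script should fix it.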
hiccoughing here documents
7 direct replies — Read more / Contribute
by Anonymous Monk
on Aug 11, 2017 at 17:39

    Is there a passably elegant way to persuade a Perl here document to repeat lines, e.g. under control of an external foreach?

    I'm trying to generate a sequence of routing commands like

    ip rule add fwmark 0x0201/0xffff table 201
    ip rule add fwmark 0x0200/0xffff table 200
    ip route add default via dev eth1.201 table 201
    ip route add default via dev eth1.200 table 200

    and while putting them in a here document is by far the most readable approach, it would be a great improvement if the repeated lines, where the count might vary, could be generated on the fly.
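    One common idiom is to interpolate an arbitrary expression into the here document via the @{[ ... ]} trick, so the repeated lines come from a map over the varying list. A sketch using the table numbers from the example (the gateway address, missing in the post, is left out here too):

```perl
use strict;
use warnings;

my @tables = (201, 200);

# @{[ EXPR ]} interpolates EXPR into a double-quoted here document,
# so map/join can generate the repeated lines inline.
my $script = <<"EOT";
@{[ join "\n", map { "ip rule add fwmark 0x0$_/0xffff table $_" } @tables ]}
@{[ join "\n", map { "ip route add default via dev eth1.$_ table $_" } @tables ]}
EOT

print $script;
```

    Adding or removing entries in @tables changes the line count without touching the here document itself.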

Passing a filehandle to subroutine
4 direct replies — Read more / Contribute
by Kagami097
on Aug 10, 2017 at 11:58
    Hello All, I am facing an issue while passing an open filehandle to a subroutine. The code goes something like below.

    my $in = "class.txt";
    open F, $in or die;
    my $line = <F>;
    while ($line) {
        if ($line =~ /xyz/) {
            SubOut(\*F);
        }
        $line = <F>;
    }
    close F;

    sub SubOut {
        my $fh = shift;
        my $s_line = <$fh>;
        my @ref;
        push (\@ref, $s_line);
        while ($s_line) {
            print STDOUT $s_line;
            $s_line = <$fh>;
        }
    }
    When I execute this code, I end up losing the line that matched in the main program, because of my $s_line = <$fh>; in the subroutine. How can I pass an open filehandle to a subroutine and still keep the line that matched in the main program?
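    A common pattern (a sketch; the input is a hypothetical in-memory file instead of class.txt) is to use a lexical filehandle and pass the already-matched line into the subroutine, so the sub never has to read ahead before printing:

```perl
use strict;
use warnings;

# Hypothetical input, opened from a string instead of a real file.
my $data = "aaa\nxyz match\nbbb\nccc\n";
open my $fh, '<', \$data or die "open: $!";

my @out;    # collect the output lines so they stay inspectable

while (my $line = <$fh>) {
    if ($line =~ /xyz/) {
        SubOut($fh, $line);   # pass the lexical handle AND the matched line
        last;
    }
}
close $fh;
print @out;

sub SubOut {
    my ($fh, $matched) = @_;
    push @out, $matched;             # the matching line is not lost
    while (my $s_line = <$fh>) {
        push @out, $s_line;          # then the rest of the file
    }
}
```

    A lexical handle ($fh) can be passed directly, with no \*F glob-reference dance.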
What can I do to improve my code - I'm a beginner
5 direct replies — Read more / Contribute
by Anonymous Monk
on Aug 10, 2017 at 05:27
    Hi there, I am working with some data in which I need to take an average every 12 lines (one reading every 5 seconds) and print that average out along with the time the data was recorded, so one averaged reading per minute. I have done this in two ways, but neither is particularly neat or speedy. I am relatively new to Perl, so I apologise to anyone who thinks I have butchered the code. The data looks as follows:

    The following is my first method, which as you can see is very chunky and just a bit of a bodge job. My second version is a bit more streamlined, but it takes longer to run: the first takes 1.8 seconds on a 120,960-line file, and the second takes 9 seconds on the same file.

    I know that is probably a lot to read and probably doesn't make sense, and I apologise. The scripts get the job done, but I am looking for help in improving my skills and making everything clearer. Any help would be greatly appreciated, as this needs to be run for about 100,000 files.
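    A sketch of the every-12-lines averaging, for comparison; the input format is assumed here to be "time value" per line, since the sample data did not survive in the post:

```perl
use strict;
use warnings;

# Hypothetical input: "MM:SS value" pairs, one reading every 5 seconds.
my @lines;
for my $i (0 .. 23) {
    push @lines, sprintf "%02d:%02d %d", int($i * 5 / 60), ($i * 5) % 60, $i + 1;
}

my @out;
my (@chunk, $stamp);
for my $line (@lines) {
    my ($time, $value) = split ' ', $line;
    $stamp //= $time;            # timestamp of the first reading in the minute
    push @chunk, $value;
    if (@chunk == 12) {          # 12 readings x 5 seconds = one minute
        my $sum = 0;
        $sum += $_ for @chunk;
        push @out, sprintf "%s %.2f", $stamp, $sum / @chunk;
        @chunk = ();
        undef $stamp;
    }
}
print "$_\n" for @out;   # prints "00:00 6.50" then "01:00 18.50"
```

    Buffering 12 values and flushing on the count avoids both line-number arithmetic and per-line regex work, which tends to help at the 100,000-file scale.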
New Meditations
A Few Impressions from Amsterdam
3 direct replies — Read more / Contribute
by haukex
on Aug 13, 2017 at 11:03

    I was originally going to post something else for my 1000th node, but I'll save that for later, since I think this is fitting: as many of you probably know, The Perl Conference in Amsterdam ended on Friday. It was my first Perl event, and a great experience! I was very happy to finally meet some of you in person: choroba, LanX, Corion, Tux, Laurent_R, and rurban. I'm really sorry I didn't get to say bye properly, since I had to leave a bit early. If I met some of you but haven't yet matched you to a username, I apologize; please drop me a /msg.

    I enjoyed the talks by some of the greats like TimToady, TheDamian, Ovid, and brian_d_foy, some of which are already available as videos on Facebook (Update: and YouTube, thanks Corion), and I hear the full videos should be online within a month or so. For my favorite funny moment of the conference: first, watch Damian Conway's talk "Three Little Words", definitely worth it on its own (as is checking out the impressive PPR), and then go to the Lightning Talks from Aug 10th and skip ahead to around 1h39m15s in the video... :-)

    Update: Added direct links to the YouTube videos.

    Update 2: Some pictures can be found at Also updated links to YouTube.

How has Perl affected you?
3 direct replies — Read more / Contribute
by stevieb
on Aug 12, 2017 at 16:58

    Slow weekend afternoon, taking a break from packing up my life and doing a huge shift towards an entirely new adventure.

    I ran into Perl through my first non-contract job: I landed at an ISP that was barely more than a start-up, and with a book off a shelf, I fell in love (Perl 4/early Perl 5 days).

    I have come to appreciate the Perl community as a whole as a group who are loyal, dedicated and serious, all the while being able to take criticism quite well.

    I savored the day I became a Friar; it allowed me to take part in some decision making on this site, which imho is the de-facto place to find Perl experts.

    I've gone on to do a lot of interesting things, meet a lot of interesting people and help a lot of people in this language (and thanks to it, other ones as well).

    I'm coming up on my 9th birthday here. So, while taking a breather from the physical duties of life, I thought I'd once again share my appreciation for PerlMonks and ask you, newbie or not: why are you invested in Perl, what has it done for you, and has it changed how you approach other tasks and problems in your day-to-day?


High Performance Game of Life
5 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Aug 11, 2017 at 08:49

    The universe of Conway's Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

    1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
    2. Any live cell with two or three live neighbours lives on to the next generation.
    3. Any live cell with more than three live neighbours dies, as if by overcrowding.
    4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
    The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed - births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick (in other words, each generation is a pure function of the preceding one). The rules continue to be applied repeatedly to create further generations.

    -- Conway's Game of Life (wikipedia)

    I've recently been forced to become acquainted with Conway's Game of Life after being asked to assess various implementations of it submitted by job applicants. Far from being a chore, it actually turned out to be very interesting. Not the GUI stuff, just the guts of the code: how to implement it cleanly, concisely, efficiently.

    I wrote a couple of solutions myself, in both Perl and C++11, which I present below. Though still a game of life novice, I've spent quite a bit of time on these solutions already - including some profiling and performance optimization of my C++ solution. My Perl solution is essentially a translation of the one I wrote first in C++.

    Having spent quite a bit of time solving this problem, I'd be interested to see how others go about it.

    The Challenge

    If you want to test yourself, and have a few hours to spare, you might like to have a go (without reading my solutions below) at implementing the above algorithm as a simple Organism class with methods:

    • insert_cells() - insert a list of live cells (x and y coordinates) into the Organism. This is used to set the starting state of the Organism.
    • count() - return the number of live cells in the Organism. This is used for verification by unit tests.
    • get_live_cells() - return the list of all cells currently alive in the Organism. This is also used for verification.
    • tick() - create the next generation of the Organism by applying the four rules above. This is where most of the hard work and performance challenges are.
    For this challenge, you don't need to be truly infinite, but you must handle cells with x and y co-ordinates in the -2 GB to 2 GB range (i.e. 32-bit signed integer x and y co-ordinates). In the interests of keeping the challenge smallish, you don't need to consider graphics, customizations, variations, just implement the four basic rules above.


    I also have a special interest in high performance computing, so hope to learn more about performance and scalability by the feedback I get from this node. I've already sought performance advice related to this challenge in Fastest way to lookup a point in a set.
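    To give a flavour of the guts of tick() without spoiling the challenge, here is a minimal sketch (not my submitted solution) that keeps live cells in a hash keyed "x,y", which handles the sparse 32-bit coordinate range naturally:

```perl
use strict;
use warnings;

# One generation: count neighbours of every live cell, then apply
# the birth/survival rules to every candidate cell.
sub tick {
    my ($live) = @_;
    my %count;
    for my $cell (keys %$live) {
        my ($x, $y) = split /,/, $cell;
        for my $dx (-1, 0, 1) {
            for my $dy (-1, 0, 1) {
                next if $dx == 0 && $dy == 0;
                $count{ ($x + $dx) . ',' . ($y + $dy) }++;
            }
        }
    }
    my %next;
    for my $cell (keys %count) {
        # Rule 4 (birth on exactly 3) and rules 1-3 (survival on 2 or 3).
        $next{$cell} = 1
            if $count{$cell} == 3
            || ($count{$cell} == 2 && $live->{$cell});
    }
    return \%next;
}

# Blinker oscillator: a column of three flips to a row.
my $gen0 = { '0,-1' => 1, '0,0' => 1, '0,1' => 1 };
my $gen1 = tick($gen0);
print join(' ', sort keys %$gen1), "\n";   # prints "-1,0 0,0 1,0"
```

    Only live cells and their neighbours are ever visited, so the cost per tick scales with the population rather than the board size.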
