The Monastery Gates

New Questions
Perl 2 Excel
2 direct replies — Read more / Contribute
by newperlbie
on Aug 21, 2018 at 07:55
    Hi All, thanks for the help on my previous Perl-to-HTML query. Now I am writing to an Excel file, and I have this code:
    my $workbook  = Spreadsheet::WriteExcel->new("output.xls");
    my $worksheet = $workbook->add_worksheet("overview");
    my $colCount = 0;
    my $rowCount = 0;
    open my $tf, "output.txt" or die "Cannot open file";
    while (my $line = <$tf>) {
        if ($line =~ /.*versions are SAME.*/) {
            my $header1 = $workbook->addformat(
                bold  => 0,
                color => 'green',
                size  => 10,
                merge => 0,
            );
            $worksheet->set_column($colCount, $colCount, 30);
            print $line;    # this is a dummy line
            $worksheet->write($rowCount, $colCount, $line, $header1);
            $rowCount++;
        }
    }
    All the lines from the text file are written to the Excel file, but after a few (random) lines the coloring stops. The following lines are just written in black. What is my mistake?
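For what it's worth, the Spreadsheet::WriteExcel documentation warns that a workbook can only hold a limited number of distinct format objects, so creating a fresh format inside the loop is a likely culprit. A hedged sketch that hoists the single `add_format()` call out of the loop (filename and match pattern taken from the post; untested against the poster's data):

```perl
use strict;
use warnings;
use Spreadsheet::WriteExcel;

my $workbook  = Spreadsheet::WriteExcel->new("output.xls");
my $worksheet = $workbook->add_worksheet("overview");

# Create the format ONCE and re-use it for every row. The Excel binary
# format has a hard limit on distinct formats per workbook, so calling
# add_format() on every matching line eventually stops taking effect.
my $green = $workbook->add_format(bold => 0, color => 'green', size => 10);

my ($rowCount, $colCount) = (0, 0);
open my $tf, '<', 'output.txt' or die "Cannot open file: $!";
while (my $line = <$tf>) {
    next unless $line =~ /versions are SAME/;
    $worksheet->set_column($colCount, $colCount, 30);
    $worksheet->write($rowCount, $colCount, $line, $green);
    $rowCount++;
}
close $tf;
$workbook->close;
```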
How to PRINT CGI html table to a PNG file
3 direct replies — Read more / Contribute
by theravadamonk
on Aug 21, 2018 at 07:03

    Dear Monks,

    I wrote a CGI script to get the load average of my CentOS server. It gives the 1-minute, 5-minute and 15-minute load averages.

    I can see it via URL http://ipaddress/cgi-bin/Sys_Load.cgi

    Now, what I need is to include this Sys_Load.cgi URL in my home.cgi code. My ultimate goal is to access the URL http://ipaddress/cgi-bin/home.cgi.

    Then everything in home.cgi should be displayed.

    I have already displayed other scripts with ""

    I can display them since each PRINTS a png image to the browser.

    Here's my home.cgi:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI ':standard';
    my $date = localtime();
    print "Content-type: text/html\n\n";
    print "<body bgcolor=\"#c0e0f9\">";
    print qq!<br/><center>
    <h1>Home</h1>
    <h3>$date</h3>
    <img src = "root_partition.cgi"><br/><br/><br/>
    <img src = "var_partition.cgi"><br/><br/><br/>
    <img src = "stacked_Bar_graph.cgi"><br/><br/><br/>
    <img src = "topsenders_hbar.cgi"><br/><br/><br/>
    <img src = "toprecipients_hbar.cgi"><br/><br/><br/></center>!;
    print "</body>";

    Here's my Sys_Load.cgi code. I think if I can PRINT the CGI HTML table to a PNG file, it will be OK. I am right, ain't I? How can I do it? Seeking help...

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI ':standard';
    use Sys::Load qw/getload uptime/;
    print "Content-type: text/html\n\n";
    print "<body bgcolor=\"#c0e0f9\">";
    my ($one_m, $five_m, $fifteen_m) = (getload())[0,1,2];
    print "<br/>";
    print "Load Average \n";
    print '<table style=width:20%><tr>'
        . '<th bgcolor=#FFDD00 height=25>1 Minute</th>'
        . '<th bgcolor=#FFDD00 height=25>5 Minutes</th>'
        . '<th bgcolor=#FFDD00 height=25>15 Minutes</th></tr>';
    print "\n    <tr style=\"font-family:verdana\" bgcolor=\"#FFFFFF\"><td>$one_m</td><td>$five_m</td><td>$fifteen_m</td></tr>";
    print '</table>';
    print "</body>";
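Rendering the table to a PNG is not actually required just to embed one CGI page inside another: an iframe can include the HTML output of Sys_Load.cgi directly. A hedged sketch (the src path is from the post; the helper name and the width/height values are made up):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: returns the markup home.cgi could print to embed
# Sys_Load.cgi as a live HTML fragment instead of converting it to an image.
sub sys_load_iframe {
    my ($src) = @_;
    return qq{<iframe src="$src" width="400" height="120" frameborder="0"></iframe>};
}

print sys_load_iframe("Sys_Load.cgi"), "\n";
```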
Find and replace based on unique identifier
2 direct replies — Read more / Contribute
by oryan
on Aug 20, 2018 at 13:04

    I need to find and replace 2 lines of code related to culverts in a model text file. I have a model with all the lines in the right order with the OLD culvert values, and another text file with the NEW culvert values but in the wrong order. What the script does currently:

    1. In the model, finds line beginning with text "Connection Culv". This is the line of text I need to replace.
    2. Finds the next line after "Connection Culv" that starts with "Conn Culvert Barrel" - this is the unique identifier for the replacement.
    3. Pulls new values of "Connection Culv" from text file and replaces them in model.
    4. Repeats for all instances of Connection Culv and then saves new file.

    Instead of ONLY replacing the line that begins with "Connection Culv" I need it to replace that line and the following line ( 111 111, 222 222, etc), but I can't get it to work.


    The model file with the OLD culvert values (in the right order):

    Connection Culv=This is Line1
    111 111
    Conn Culvert Barrel=Culvert1
    *
    Connection Culv=This is Line2
    222 222
    Conn Culvert Barrel=Culvert2
    *
    Connection Culv=This is Line3
    333 333
    Conn Culvert Barrel=Culvert3
    *

    The text file with the NEW culvert values (in the wrong order):

    Connection Culv=This is Line3 - New text here
    333 333 This should be new too
    Conn Culvert Barrel=Culvert3
    *
    Connection Culv=This is Line1 - New text here
    111 111 This should be new too
    Conn Culvert Barrel=Culvert1
    *
    Connection Culv=This is Line2 - New text here
    222 222 This should be new too
    Conn Culvert Barrel=Culvert2
    *

    Current output (only the "Connection Culv" line is replaced; the following line keeps the old values):

    Connection Culv=This is Line1 - New text here
    111 111
    Conn Culvert Barrel=Culvert1
    *
    Connection Culv=This is Line2 - New text here
    222 222
    Conn Culvert Barrel=Culvert2
    *
    Connection Culv=This is Line3 - New text here
    333 333
    Conn Culvert Barrel=Culvert3
    *

    Desired output (both lines replaced):

    Connection Culv=This is Line1 - New text here
    111 111 This should be new too
    Conn Culvert Barrel=Culvert1
    *
    Connection Culv=This is Line2 - New text here
    222 222 This should be new too
    Conn Culvert Barrel=Culvert2
    *
    Connection Culv=This is Line3 - New text here
    333 333 This should be new too
    Conn Culvert Barrel=Culvert3
    *

    There are hundreds of these replacements that need to be made throughout the model. I feel like this should be simple, but nothing has worked. Here is the code I have currently that works for replacing the single line "Connection Culv" but not the following line. Any help is appreciated. Thanks.

    # HEC-RAS Replacement Perl Script
    # This will find and replace values in the HEC-RAS geometry file for
    # modified culvert barrels. The process is:
    #   1. In the existing model file (HECRAS_Ex.txt), find where there is a
    #      "Connection Culv" (this is the line that needs to be replaced)
    #   2. It then scans down for the next "Conn Culvert Barrel" line (this
    #      is the unique identifier)
    #   3. It then takes from the new culvert file (culvNEW.txt) the new
    #      "Connection Culv" line and replaces it in the existing
    #      HECRAS_Ex.txt file.
    #   4. Repeats for all and then saves out Output_HECRAS.txt
    # Usage:
    #   C:\MyDir> perl culvNEW.txt HECRAS_Ex.txt OutPut_HECRAS.txt

    # Read existing HEC-RAS geometry file (HECRAS_Ex) with old culvert
    # connection attributes
    open (TEMPLATE, $ARGV[1]) or die;
    @HECRAS_Ex = <TEMPLATE>;
    close TEMPLATE;

    # Read new culvert data file (culvNEW) with new connection culvert
    # attributes
    open (TEMPLATE, $ARGV[0]) or die;
    @culvNEW = <TEMPLATE>;
    close TEMPLATE;

    for ($i = 0; $i < @HECRAS_Ex; $i++) {
        # only check lines starting with "Connection Culv" in the HECRAS_Ex file
        if ($HECRAS_Ex[$i] =~ /^Connection Culv/) {
            # scan for the following "Conn Culvert Barrel" line
            $iback = $i - 1;
            while ($HECRAS_Ex[$iback] !~ /^Conn Culvert Barrel/) {
                $iback = $iback + 1;
            }
            $local0 = $HECRAS_Ex[$iback];
            chomp($local0);
            for ($j = 0; $j < @culvNEW; $j++) {
                $local = $culvNEW[$j];
                chomp($local);
                if ($local =~ /^$local0/) {
                    # found the matching barrel line in culvNEW; walk back
                    # up to its "Connection Culv" line
                    $jforward = $j - 1;
                    while ($culvNEW[$jforward] !~ /^Connection Culv/) {
                        $jforward = $jforward - 1;
                    }
                    # Perform substitutions of LG card
                    $HECRAS_Ex[$i] = $culvNEW[$jforward];
                }
            }
        }
    }

    # write out the geometry file based on the HECRAS_Ex file structure and
    # the new values in the culvNEW file
    open (OUT, ">" . $ARGV[2]) or die;
    print OUT @HECRAS_Ex;
    close OUT;
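A hedged sketch of one way to replace both lines: read the NEW file first and build a hash mapping each "Conn Culvert Barrel" identifier to the two lines that precede it, then walk the model and swap in both lines. The record layout is taken from the post; the helper name is made up:

```perl
#!/usr/bin/perl
# Sketch only: replaces BOTH the "Connection Culv" line and the line after
# it, keyed on the following "Conn Culvert Barrel" line.
use strict;
use warnings;

sub replace_culvert_pairs {
    my ($new_lines, $model_lines) = @_;

    # Map each barrel identifier to the two lines preceding it in the
    # new-values file (the "Connection Culv" line and its data line).
    my %new;
    for my $i (2 .. $#$new_lines) {
        next unless $new_lines->[$i] =~ /^Conn Culvert Barrel/;
        $new{ $new_lines->[$i] } = [ @{$new_lines}[ $i - 2, $i - 1 ] ];
    }

    # Walk the model; at each "Connection Culv" line, look ahead for its
    # barrel identifier and swap in the matching pair of new lines.
    my @out = @$model_lines;
    for my $i (0 .. $#out) {
        next unless $out[$i] =~ /^Connection Culv/;
        my $j = $i + 1;
        $j++ while $j < @out and $out[$j] !~ /^Conn Culvert Barrel/;
        my $pair = $j < @out ? $new{ $out[$j] } : undef;
        @out[ $i, $i + 1 ] = @$pair if $pair;
    }
    return @out;
}

# Demo with abridged sample records from the post:
my @model = ("Connection Culv=This is Line1", "111 111",
             "Conn Culvert Barrel=Culvert1", "*");
my @new   = ("Connection Culv=This is Line1 - New text here",
             "111 111 This should be new too",
             "Conn Culvert Barrel=Culvert1", "*");
print "$_\n" for replace_culvert_pairs(\@new, \@model);
```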
Write large array to file, very slow
4 direct replies — Read more / Contribute
by junebob
on Aug 20, 2018 at 10:04

    Hi, I have written a bit of Perl which is performing very slowly, so I'm hoping to get some advice here.

    The script takes in any number of files, where every line of each file starts with a 10-hexit hex count, followed by anything. The count on each line is always greater than the count on the previous line. The task is to merge all the input files into one file, in order. The input files can be quite large, 3GB or so. After a bit of googling I decided to read all the input files into arrays, put the result in a new array, and finally write out the new array to a file, mainly because I have access to machines with lots of RAM: I thought if it's all chucked into memory it'll be faster, and then I just dump the end result into a file.

    It hasn't really worked out as I expected. The script gets to the point where the final array is complete and starts writing out to the file after about an hour. However, just writing to the file is taking many hours!

    Any suggestions as to how to improve my script? Thanks!

    #!/bin/env perl
    use strict;
    use warnings;
    use List::Util qw(min max);
    use Math::BigInt;

    my @filenames = @ARGV;

    # Hash keyed by filename. Each filename points to an array containing the
    # entire contents of the file, plus an array of timestamps.
    my %all_files = ();

    # >32-bit hex to dec function
    sub hex2dec {
        my $hex = shift;
        return Math::BigInt->from_hex("0x$hex");
    }

    # For each file on the command line, create a new hash entry indexed by
    # the filename. Each entry is an array containing the contents of the file.
    foreach my $filename (@filenames) {
        open(my $handle, "<", "$filename") or die "Failed to open file $filename: $!\n";
        while (<$handle>) {
            chomp;
            my $fullline = $_;
            if ($fullline =~ m/(\w+).*/) {
                # Store contents of line
                my $timestamp = $1;
                push @{$all_files{$filename}}, $fullline;
                push @{$all_files{"$filename.timestamp"}}, $timestamp;
            }
            else {
                print "Unexpected line format: $fullline in $filename\n";
                exit;
            }
        }
        close $handle;
        $all_files{"$filename.neof"} = 1;
    }

    my $neofs = 1;
    my @minarray = ();
    my $min = 0;
    my $storeline = "";
    my @mergedlogs = ();
    my $matchmin = 0;
    my $line = 0;

    while ($neofs == 1) {
        print "$line\n";
        $line++;
        $neofs = 0;
        # First find the lowest count
        foreach my $filename (@filenames) {
            print "@{$all_files{\"$filename.timestamp\"}}[0]\n";
            my $tmpdec = hex2dec(@{$all_files{"$filename.timestamp"}}[0]);
            print "$tmpdec\n";
            push @minarray, hex2dec(@{$all_files{"$filename.timestamp"}}[0]);
        }
        $min = min @minarray;
        @minarray = ();
        # For each file matching the lowest count, shift out the current line
        foreach my $filename (@filenames) {
            print "$filename $min";
            $matchmin = 0;
            if (hex2dec(@{$all_files{"$filename.timestamp"}}[0]) == $min
                && $all_files{"$filename.neof"} == 1) {
                $matchmin = 1;
                $storeline = shift @{$all_files{$filename}};
                shift @{$all_files{"$filename.timestamp"}};
                # Check if array is empty (i.e. file completed)
                if (!@{$all_files{$filename}}) {
                    # If so, set not-end-of-file to 0
                    $all_files{"$filename.neof"} = 0;
                    # Force count value to max so that it loses all future min battles
                    push @{$all_files{"$filename.timestamp"}}, "10000000000";
                }
                # Push the line to the merged file.
                push @mergedlogs, "$storeline $filename";
            }
            $neofs = $neofs || $all_files{"$filename.neof"};
        }
    }

    unlink "mergedlogs.txt";
    foreach (@mergedlogs) {
        open FH, ">>mergedlogs.txt" or die "can't open mergedlogs.txt: $!";
        print FH "$_\n";
        close FH;
    }
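One concrete slowdown in the posted script is the final output loop, which reopens and closes mergedlogs.txt once per line, forcing a flush for every element. A hedged sketch of the usual pattern, opening once and letting Perl buffer the writes:

```perl
use strict;
use warnings;

# Open the output file once, write everything, close once. Reopening in
# append mode for every element costs an open/flush/close per line and can
# turn a minutes-long job into hours.
sub write_lines {
    my ($path, $lines) = @_;
    open my $fh, '>', $path or die "can't open $path: $!";
    print {$fh} map { "$_\n" } @$lines;
    close $fh or die "close $path: $!";
}

write_lines("mergedlogs.txt", [ "line one", "line two" ]);
```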
Error DBI::SQLite file not existent
3 direct replies — Read more / Contribute
by IB2017
on Aug 19, 2018 at 13:41

    Hello monks

    I am trying to improve the error handling of my application that uses SQLite. I have noticed that the following script does not produce an error, no matter what $database is (empty, a non-existent database name, etc.). However, if $database does not exist I would have expected to see an error. This forces me to first check for the existence of $database. Is this a feature?

    use strict;
    use DBI;

    my $driver   = "SQLite";
    my $database = "";
    my $dsn      = "DBI:$driver:dbname=$database";
    my $userid   = "";
    my $password = "";
    my $dbh = DBI->connect($dsn, $userid, $password, { RaiseError => 1 })
        or die $DBI::errstr;
    print "Opened database successfully\n";
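This is documented SQLite behaviour: by default connect() creates the database file if it does not exist (and an empty dbname gives a private temporary database). If I remember the DBD::SQLite docs correctly, you can pass sqlite_open_flags without the CREATE flag so that connecting to a missing file raises an error instead. A hedged, untested sketch:

```perl
use strict;
use warnings;
use DBI;
# DBD::SQLite::Constants ships with recent versions of DBD::SQLite.
use DBD::SQLite::Constants qw(SQLITE_OPEN_READWRITE);

my $database = "no_such_file.db";

# Passing open flags WITHOUT SQLITE_OPEN_CREATE makes connect() fail when
# the file does not already exist, instead of silently creating it.
my $dbh = DBI->connect(
    "DBI:SQLite:dbname=$database", "", "",
    { RaiseError => 1, sqlite_open_flags => SQLITE_OPEN_READWRITE },
) or die $DBI::errstr;
```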
Help with parsing command line to make more readable
4 direct replies — Read more / Contribute
by proxie
on Aug 19, 2018 at 13:24
    Hi folks. I work at a company where I don't normally do scripting. I tend to use Perl for about 3 days for some one-off scripting need, and then promptly forget everything I learned. I was hoping someone could help me write this regexp. Say I have an automatically dumped command line like:
    execute test_number_1 -tex -tex_args -sub_args +debug_dir=./ -sub_args +debug_dir=./ -constraint parity_en,random_en -sub_args '"' ruck=1 '"' -constraint dual_en -sub_args -cd -sub_args 2596.slow -sub_args test -seed 1 -tex_args- -opt 1 -tag 2
    Since it's difficult to parse what's really going on when it's all a single line, I instead would like to create a script that would take this command line as input and effectively print out something like the below in a kind of tree format:
    execute test_number_1
        -tex
        -tex_args
            -sub_args +debug_dir=./
            -sub_args +debug_dir=./
            -constraint parity_en,random_en
            -sub_args '"' ruck=1 '"'
            -constraint dual_en
            -sub_args -cd
            -sub_args 2596.slow
            -sub_args test
            -seed 1
        -tex_args-
        -opt 1
        -tag
    Can someone help me get started on the most efficient regexes to do this? There are certain anchors I can key off of in terms of knowing when to tab, but it's not consistent. For instance, I can't always use the "-" reliably to know when to tab over.
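A hedged starting point: split the dumped command line at each "-option", keep an option's values on its line, and indent the options between "-tex_args" and "-tex_args-" one level deeper. The option names are taken from the post; the grouping rule is an assumption based on the sample, so adjust the anchors to taste:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub tree_format {
    my ($cmdline) = @_;
    my @lines;
    my $depth = 0;
    for my $t (split ' ', $cmdline) {
        if ($t =~ /^-/) {
            $depth-- if $t eq '-tex_args-';        # close the group first
            push @lines, ('    ' x ($depth + 1)) . $t;
            $depth++ if $t eq '-tex_args';         # open the group after
        }
        elsif (@lines) {
            $lines[-1] .= " $t";                   # value of the current option
        }
        else {
            push @lines, $t;                       # the command word itself
        }
    }
    return join "\n", @lines;
}

print tree_format(q{execute test_number_1 -tex -tex_args }
    . q{-sub_args +debug_dir=./ -seed 1 -tex_args- -opt 1 -tag 2}), "\n";
```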
Remove all duplicates after regex capture
4 direct replies — Read more / Contribute
by Maire
on Aug 19, 2018 at 05:45

    Hello, Monks. I'm hoping that you can help me with what is probably quite a simple problem, but which is completely stumping me!

    I'm trying to work with text files which are fairly disorganized in terms of structure, and which have been stored in a hash. What I want from each file is to extract the line of text which begins with "title:#" and ends with a "#", and store this text in a scalar to be used later.

    The problem arises because, within some of the text files, there are multiple lines which begin and end with "title:#" and "#" respectively. What distinguishes the "titles" I want is that they only ever appear in each text file once, whereas the "titles" I do not want appear at least twice (and sometimes three or four times) in the same text file.

    So this is the basic script that I am using, which prints out all the titles

    use warnings;
    use strict;

    my %mycorpus = (
        a => "<blah blah blah blah title:#this is text I want 1# blah blah blah",
        b => "blah title:#this is text I do not want# blah title:#this is text I want 2# blah title:#this is text I do not want# blah",
        c => "blah blah title:#this is text I want 3# title:#this is text I do not want# title:#this is text I do not want# title:#this is text I do not want# blah",
    );

    foreach my $filename (sort keys %mycorpus) {
        my $titles = '';
        while ($mycorpus{$filename} =~ /title:#(.*?)#/g) {
            $titles = $1;
            print "$titles \n";
        }
    }

    The script above, obviously, prints out all 8 captured lines which begin with "title", but my desired output is:
    this is text I want 1
    this is text I want 2
    this is text I want 3
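One hedged way to get there: collect every "title:#...#" capture per file in one pass, count the occurrences in a hash, and keep only the titles that occur exactly once in that file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub unique_titles {
    my ($text) = @_;
    my @titles = $text =~ /title:#(.*?)#/g;   # all captures at once
    my %count;
    $count{$_}++ for @titles;
    return grep { $count{$_} == 1 } @titles;  # drop anything seen twice or more
}

# Abridged corpus in the shape of the post's data:
my %mycorpus = (
    a => "blah title:#this is text I want 1# blah",
    b => "blah title:#dup# blah title:#this is text I want 2# blah title:#dup# blah",
);
for my $filename (sort keys %mycorpus) {
    print "$_\n" for unique_titles($mycorpus{$filename});
}
```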
Module::Metadata case sensitivity
2 direct replies — Read more / Contribute
by Anonymous Monk
on Aug 18, 2018 at 23:00
    perl -MModule::Metadata -MData::Dumper -le "print(Dumper(Module::Metadata->new_from_module('CGI')))"
    perl -MModule::Metadata -MData::Dumper -le "print(Dumper(Module::Metadata->new_from_module('cgi')))"
    In the second example, searching for "cgi" returns the metadata for "CGI", but both the module and filename keys contain "cgi" and are not valid as a package or file name for the "CGI" distribution?
Help publishing my DB migration project
2 direct replies — Read more / Contribute
by juankpro
on Aug 16, 2018 at 19:56

    I started learning Perl a couple of months ago and started a project which I call Migrate so I can learn as much Perl as possible.

    The project allows managing DB schema changes (migrations) using Perl syntax for creating and removing tables, indexes, columns and constraints instead of plain SQL.

    The current implementation uses DBI and DBD::SQLite, and there is also an implementation for Informix, but I made it in such a way that other RDBMS implementations can easily be added as separate libraries.

    I need the following:

    • Advice on how I can get a Perl partner to help me finish the tool, especially the test suite.
    • Get someone to review my code so I can be confident I'm on the right path
    • Advice on the next steps, or what I'm still missing, to publish my project on CPAN
    My project is posted here: There is already general documentation in the README file, as well as an API reference in the repo Wiki. Please guide me. Thanks in advance
Bitbucket Pipelines and DBD::mysql
No replies — Read more | Post response
by zecat
on Aug 15, 2018 at 13:05
    Decided to try out a Perl project with Bitbucket Pipelines and deploy it to an EC2 instance running Ubuntu Xenial in AWS, using the perl:5.26 Docker Hub image. This is also my first time using Docker. All goes well, except that when I try to start the PSGI I get the following:

    Can't load application from file "/var/www/production-1534349533/script/testproject": Can't load '/var/www/production-1534349533/script/../local/lib/perl5/x86_64-linux-gnu/auto/DBD/mysql/' for module DBD::mysql: cannot open shared object file: No such file or directory at /home/ubuntu/perl5/perlbrew/perls/perl-5.26.0/lib/5.26.0/x86_64-linux/ line 193. at /var/www/production-1534349533/local/lib/perl5/Mojo/mysql/ line 5.

    Some googling revealed a few possible solutions that involved updating a symlink, purging and reinstalling MariaDB to utilize their MySQL drivers, or installing this library (which seems to have dropped off the map in the apt-get repos). Instead I deleted my local directory and ran my dependency pipeline command on the EC2 instead:

    cpanm -nvL local --installdeps .

    Try firing up the PSGI again and now it works! So this led me to building my own image for Pipelines, using base images of ubuntu:xenial and perl:5.26, to see if this solves the problem, and I still get the same error as mentioned above. Any thoughts on what would be different between my Xenial image and my Xenial EC2 instance when trying to run the PSGI? I'm on the fence about whether this is a lack of understanding of the compiler and library requirements of DBD::mysql, or of Docker.

    Dockerfile of my custom image mentioned above:
    FROM ubuntu:16.04
    RUN apt-get update
    RUN apt-get -q -y install build-essential
    RUN apt-get -q -y install libxml2
    RUN apt-get -q -y install libxml2-dev
    RUN apt-get -q -y install libmysqlclient-dev
    RUN apt-get -q -y install libexpat1-dev
    RUN apt-get -q -y install libssl-dev
    RUN apt-get -q -y install libnet-ssleay-perl
    RUN apt-get -q -y install git-core
    FROM perl:5.26
    RUN cpanm -nv local::lib Dist::Zilla Dist::Zilla::Plugin::FakeRelease Dist::Zilla::Plugin::Git::NextVersion Dist::Zilla::Plugin::PkgVersion Dist::Zilla::Plugin::Prereqs Dist::Zilla::Plugin::PruneFiles Dist::Zilla::PluginBundle::Filter Dist::Zilla::PluginBundle::Git
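For what it's worth, the two FROM lines make this a multi-stage build: everything apt-get installed in the ubuntu:16.04 stage is discarded when the perl:5.26 stage starts, which would explain why shared libraries such as the MySQL client library are missing at runtime. A hedged single-stage sketch (package list copied from the post):

```dockerfile
# Single stage: install the system libraries into the SAME image that runs
# Perl, so they are present when DBD::mysql loads its shared object.
FROM perl:5.26
RUN apt-get update && apt-get -q -y install \
    build-essential libxml2 libxml2-dev libmysqlclient-dev \
    libexpat1-dev libssl-dev libnet-ssleay-perl git-core
RUN cpanm -nv local::lib Dist::Zilla Dist::Zilla::Plugin::FakeRelease \
    Dist::Zilla::Plugin::Git::NextVersion Dist::Zilla::Plugin::PkgVersion \
    Dist::Zilla::Plugin::Prereqs Dist::Zilla::Plugin::PruneFiles \
    Dist::Zilla::PluginBundle::Filter Dist::Zilla::PluginBundle::Git
```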
Paws S3 Download Object?
No replies — Read more | Post response
by taylorK
on Aug 15, 2018 at 12:37

    Hi Monks!

    Does anyone have any Paws/S3 experience? I have been working on using Paws with an S3 bucket and an SQS queue, and I need to download objects from the S3 bucket. My original method used curl, but that doesn't work since you just get an unauthorized-error file back. I understand using GetObject and then reading the body into a file, but unfortunately the objects I am trying to download are all .gz "files", so reading the object body does not really work out. I looked at the S3 module and it looks like there is a "get_key_filename" method that does what I am looking for. Does anyone know if there is a similar method in Paws, or do you have any other crafty ways of taking care of this?
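A hedged, untested sketch of one approach with Paws: fetch the object with GetObject and write its Body to disk through the :raw layer, which keeps gzip bytes intact where a text-mode write could mangle them. The bucket, key and region names below are made up:

```perl
use strict;
use warnings;
use Paws;

# Assumptions: credentials are configured in the environment, and the
# GetObject result exposes the object's raw content via its Body accessor.
my $s3  = Paws->service('S3', region => 'us-east-1');
my $res = $s3->GetObject(Bucket => 'my-bucket', Key => 'logs/file.gz');

open my $fh, '>:raw', 'file.gz' or die "file.gz: $!";
print {$fh} $res->Body;    # raw bytes, so the .gz stays a valid gzip file
close $fh;
```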

    Thank you!

use warnings is complaining
5 direct replies — Read more / Contribute
by Anonymous Monk
on Aug 15, 2018 at 09:34

    Hi, I have written a script that reads some data and reports on it. But when I use warnings I get the following complaint: Argument "" isn't numeric in printf at C:\Strawberry\codes\ line 11, <DATA> line 3.

    Here's my code:

    use strict;
    #use warnings;

    printf("%-11s %-27s %9s %11s", "Date", "Description", "Incoming", "Outgoing\n");
    my $x = 0;
    my $tot;
    my $totex;
    while (<DATA>) {
        if ($x == 0) { $x++; next; }
        my ($date, $des, $inc, $exp) = unpack("A10 A27 A10 A*", $_);
        printf("%-10s %-27s %10.2f %10.2f\n", $date, $des, $inc, $exp);
        $tot   += $inc;
        $totex += $exp;
    }
    printf("%38s %10.2f %10.2f", "Totals", $tot, $totex);

    __DATA__
    Date       Description                incoming   outgoing
    01/24/2001 Zed's Camel Emporium          100.00    1147.99
    01/28/2001 Flea spray                     24.99
    01/29/2001 Camel rides to tourists       235.00
    01/31/2001 avage1                        125.00
    01/20/2001 carpe diem                     20.00      23.00

    How can I get rid of this, besides turning warnings off?
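The warning comes from %10.2f receiving the empty string that unpack returns for a blank column (e.g. the missing outgoing amount on the "Flea spray" line, DATA line 3). Rather than turning warnings off, the usual fix is to default empty fields to zero before printing; a small sketch with a made-up helper name:

```perl
use strict;
use warnings;

# unpack yields '' (or undef) for blank columns; default those to 0 so
# printf's %f conversions always receive a number.
sub row_amounts {
    my ($inc, $exp) = @_;
    return map { length($_ // '') ? $_ : 0 } ($inc, $exp);
}

my ($inc, $exp) = row_amounts("24.99", "");
printf("%10.2f %10.2f\n", $inc, $exp);    # no warning now
```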

New Meditations
The Future of Perl 5
5 direct replies — Read more / Contribute
by Laurent_R
on Aug 18, 2018 at 09:17
    Yesterday, for the last day of The Perl Conference (formerly known as YAPC) in Glasgow, Scotland, Curtis "Ovid" Poe delivered a very inspiring keynote address on the future of Perl.

    Ovid's idea was to imagine where Perl 5 would stand in ten years from now.

    These are some of the things Perl would have in 10 years in Ovid's vision:

    As an example for the first point above, consider the following subroutine:
    sub fibonacci {
        my $n = shift;
        return 1 if $n == 1 or $n == 0;
        return fibonacci($n-1) + fibonacci($n-2);
    }
    This subroutine should work correctly if the subroutine is called with a positive integer, but a number of things could go wrong: what will happen if no parameter is passed to the subroutine? or if the parameter is a negative integer, say -3? or if the parameter is a positive number but not an integer, e.g. 3.14? or if the parameter is not a number but a string? (Note that there would also be a problem with a large integer, but that's a different story.)

    For the above subroutine to be correct, you would probably need to add half a dozen lines of boilerplate code to guard against invalid input, maybe something like this:

    sub fibonacci {
        die "..." unless defined $_[0];
        my $n = shift;
        die "..." unless is_a_number($n);   # the is_a_number function is to be defined
        die "..." if $n < 0;
        die "..." if $n != int $n;
        # ...
    }

    With (non experimental) function signatures and proper typing, all you would need might just boil down to something like:

    sub fibonacci (PositiveInt $n) {
        return 1 if $n <= 1;
        return fibonacci($n-1) + fibonacci($n-2);
    }
    I think this looks quite a bit cleaner.
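Something fairly close is already possible today on CPAN (this is not part of Ovid's proposal, just an existing workaround); a hedged sketch using Type::Params and Types::Standard to validate the argument at the call boundary:

```perl
use strict;
use warnings;
use Type::Params   qw(compile);
use Types::Standard qw(Int);

# compile() builds a reusable argument checker; Int->where(...) narrows it
# to non-negative integers, approximating the hypothetical PositiveInt.
my $check = compile( Int->where(sub { $_ >= 0 }) );

sub fibonacci {
    my ($n) = $check->(@_);    # dies with a descriptive message on bad input
    return 1 if $n <= 1;
    return fibonacci($n - 1) + fibonacci($n - 2);
}
```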

    I hope the video of Ovid's talk will be online soon.

    Comments are welcome.

RFC: A DSL for SQL (part 1)
No replies — Read more | Post response
by LanX
on Aug 17, 2018 at 18:10

    This is just a rough hack demonstrating a proof of concept for an "SQL::DSL".

    (and a general pattern for designing complex domain specific languages)

    I'm currently refactoring it out into clean sub-modules but wanted to show something already.

    The demonstrated ideas here are already sufficiently complex to discuss.

    (The implementation of named operators like BETWEEN and a JOIN mechanism are subject of threads to come)

    given this input:

    my ($user, $workhrs0, $geo0, $workhrs1, $geo1)
        = ('NWIGER', '20', 'ASIA', '50', 'EURO');

    query {
        package Table;
        WHERE (
            ANDS (
                user == $user,
                ORS (
                    ANDS ( workhrs > $workhrs0, geo == 20 ),
                    ORS  ( $workhrs1 < workhrs, geo == $geo1 )
                )
            )
        );
    };

    the function query will return an AST (abstract syntax tree) of nested "SQL::DSL" objects, which can be rendered into a target dialect like MySQL, Oracle... (or maybe even SQL::Abstract or DBIx code).

    Some basic ideas are:

    • SQL-Tables (here "Table") are realized as packages
    • These packages are limited to the scope of the surrounding code-block such that no namespace pollution occurs (a common problem with DSLs)
    • The Columns (here user) are realized as constants in this namespace returning "SQL::DSL::Column" objects
    • The operators are overloaded for Column objects and return "SQL::DSL::Operator" objects with nested Operand objects
    • Literal operands (like 20) are identified because they are readonly
    • Variable operands are identified and can be replaced by ? placeholders at render-time
    • actual values of the placeholders can be captured as variable references from the closure-vars and can be bound to the DBI->execute() later
    • "higher order" operations on nested operations just return the nested objects in a higher blessed container augmenting the AST
    • the rendering happens by walking the generated AST and calling a render method on the encountered objects
    • the whole algorithm might look slow but we only need to run it once and memoize the result for later executions.
    Here the steps in the middle:

    === B::Deparse of the code:
    {
        package Table;
        use warnings;
        use strict;
        use feature 'say';
        WHERE(ANDS(user() == $user, ORS(ANDS(workhrs() > $workhrs0, geo() == 20), ORS($workhrs1 < workhrs(), geo() == $geo1))));
    }
     at d:/Users/lanx/vm_share/perl/Talks/DSL/2018_GPW/exp/ line 51.

    === Tidy of deparsed Perl code:
    {
        package Table;
        use warnings;
        use strict;
        use feature 'say';
        WHERE(
            ANDS(
                user() == $user,
                ORS(
                    ANDS( workhrs() > $workhrs0, geo() == 20 ),
                    ORS( $workhrs1 < workhrs(), geo() == $geo1 )
                )
            )
        );
    }

    === Abstract syntax tree (simplified):
    :'WHERE' is ::Clause
      :'ANDS' is ::Joiner
        :'=' is ::Infix
          :'user' is ::Column
          :'NWIGER' is ::Placeholder ["\n ", "NWIGER"]
        :'ORS' is ::Joiner
          :'ANDS' is ::Joiner
            :'>' is ::Infix
              :'workhrs' is ::Column
              :'20' is ::Placeholder ["\n ", 20]
            :'=' is ::Infix
              :'geo' is ::Column
              :'20' is SCALAR
          :'ORS' is ::Joiner
            :'<' is ::Infix
              :'50' is ::Placeholder ["\n ", 50]
              :'workhrs' is ::Column
            :'=' is ::Infix
              :'geo' is ::Column
              :'EURO' is ::Placeholder ["\n ", "EURO"]

    === Produced SQL:
    WHERE (
        user = ?
        AND (
            ( workhrs > ? AND geo = 20 )
            OR
            ( ? < workhrs OR geo = ? )
        )
    ) at d:/Users/lanx/vm_share/perl/Talks/DSL/2018_GPW/exp/SQL_abstr line 59.

    here the code

    Any comments so far? :)

    Cheers Rolf
    (addicted to the Perl Programming Language :)

New Cool Uses for Perl
Exploring Type::Tiny Part 5: match_on_type
No replies — Read more | Post response
by tobyink
on Aug 19, 2018 at 14:41

    Type::Tiny is probably best known as a way of having Moose-like type constraints in Moo, but it can be used for so much more. This is the fifth in a series of posts showing other things you can use Type::Tiny for. This article along with the earlier ones in the series can be found on my blog and in the Cool Uses for Perl section of PerlMonks.

    It's pretty common to do things like this:

    use Types::Standard qw( is_ArrayRef is_HashRef );
    use Carp qw( croak );

    sub process_data {
        my ($self, $data) = @_;
        if (is_ArrayRef($data)) {
            $self->_process_value($_) for @$data;
        }
        elsif (is_HashRef($data)) {
            $self->_process_value($_) for values %$data;
        }
        else {
            croak "Could not grok data";
        }
    }

    Type::Utils provides a perhaps slightly neater way to do this:

    use Types::Standard qw( ArrayRef HashRef Any );
    use Type::Utils qw( match_on_type );
    use Carp qw( croak );

    sub process_data {
        my ($self, $data) = @_;
        match_on_type $data,
            ArrayRef, sub { $self->_process_value($_) for @$data },
            HashRef,  sub { $self->_process_value($_) for values %$data },
            Any,      sub { croak "Could not grok data" };
    }

    The match_on_type function takes a value and a set of type–coderef pairs, dispatching to the first coderef where the value matches the type constraint. This function is stolen from Moose::Util::TypeConstraints.

    You can get an order of magnitude faster, though, by doing something similar to what Type::Params does: compiling the match once, then calling it as needed.

    Let's look at a naïve (and wrong) way to do this first and examine the problems:

    use Types::Standard qw( ArrayRef HashRef Any );
    use Type::Utils qw( compile_match_on_type );
    use Carp qw( croak );

    sub process_data {
        my ($self, $data) = @_;
        state $matcher = compile_match_on_type
            ArrayRef, sub { $self->_process_value($_) for @$data },
            HashRef,  sub { $self->_process_value($_) for values %$data },
            Any,      sub { croak "Could not grok data" };
        $matcher->($data);
    }

    The big problem here is that the first time process_data is called, the matcher will close over $self and $data. Subsequent calls to $matcher will reuse the same closed over variables. Oops.

    The simplest way of solving this is to take advantage of the fact that a compiled matcher (unlike match_on_type) can take a list of arguments, not just one. Only the first argument is used for the type matching, but all arguments are passed to the coderefs on dispatch.

    use Types::Standard qw( ArrayRef HashRef Any );
    use Type::Utils qw( compile_match_on_type );
    use Carp qw( croak );

    sub process_data {
        my ($self, $data) = @_;
        state $matcher = compile_match_on_type
            ArrayRef, sub { my ($d, $s) = @_; $s->_process_value($_) for @$d },
            HashRef,  sub { my ($d, $s) = @_; $s->_process_value($_) for values %$d },
            Any,      sub { croak "Could not grok data" };
        $matcher->($data, $self);
    }

    Like many Type::Tiny interfaces that expect coderefs, compile_match_on_type also accepts strings of Perl code as an alternative, and is able to optimize things better if those are supplied:

    use Types::Standard qw( ArrayRef HashRef Any );
    use Type::Utils qw( compile_match_on_type );
    use Carp qw();

    sub process_data {
        my ($self, $data) = @_;
        state $matcher = compile_match_on_type
            ArrayRef, q{ my ($d, $s) = @_; $s->_process_value($_) for @$d },
            HashRef,  q{ my ($d, $s) = @_; $s->_process_value($_) for values %$d },
            Any,      q{ Carp::croak("Could not grok data") };
        $matcher->($data, $self);
    }

    The coderefs compiled by compile_match_on_type should be very efficient. The technique is very similar to how Type::Coercion compiles coercions.

Proxying (almost) all methods in a class for mass memoization
1 direct reply — Read more / Contribute
by Tommy
on Aug 15, 2018 at 20:08

    If this could be done in a 'better' way, I'd enjoy hearing it. Criticisms welcome.

    Recently I needed to cache the output of almost every class method in one of my modules that is part of a web app. Every method runs a different database query and encodes the results to JSON. The results change daily, so upstream of the module implementation there is logic that enforces a 12-hour TTL on anything any method returns. In the interim, though, there's no reason for the app to run these database queries at all when it has already done the work. Reading about possible approaches on Stack Overflow yesterday, I saw that use of AUTOLOAD was discouraged, so this is what I came up with; as far as I can tell, after running two days in DEV, it has no issues. I'm actually quite pleased, because this approach allowed me to be 'clever' without an implementation that is unmaintainable and unintelligible to others... Gist here

    use strict;
    use warnings;

    package My::Class::Proxy;

    # Drop-in replacement for 'Some::Class'
    # Proxies all public method calls to Some::Class in order to provide smart
    # caching and memoization, e.g. avoiding expensive DB queries when not required

    use 5.020;
    use Moose;
    extends 'Some::Class';
    use Moose::Util qw();

    my $meta    = Moose::Util::find_meta( 'Some::Class' );
    my @nocache = qw( new meta DESTROY AUTOLOAD );

    state $outputs = {};

    for my $method ( $meta->get_method_list ) {
        # don't memoize blacklisted or private methods
        next if ( grep { $_ eq $method } @nocache or $method =~ /^_/ );

        around $method => sub {
            my ( $orig, $self, $refresh, @args ) = @_;
            $outputs = {} if !!$refresh;
            @args = map { $_ // '' } @args;
            my $call_key = join '', $orig, @args;
            return $outputs->{ $call_key } if defined $outputs->{ $call_key };
            $outputs->{ $call_key } = $self->$orig( @args );
            return $outputs->{ $call_key };
        };
    }

    # Moose-specific optimization
    __PACKAGE__->meta->make_immutable();

    1;

    A mistake can be valuable or costly, depending on how faithfully you pursue correction