Cool Uses for Perl


This section is the place to post your general code offerings -- everything from one-liners to full-blown frameworks and apps.

CUFP's
perlpdf for perldoc as PDF
1 direct reply — Read more / Contribute
by Anonymous Monk
on Jun 02, 2018 at 05:40
    Type perlpdf instead of perldoc!

    This bash function uses pod2pdf to produce a very nice sans-serif font PDF with a title and footer, and opens it with the "open" command (for OSX; adjust as needed). The tempfile is necessary so the PDF reader (Preview) has a filename to offer should one choose to save the file, for example: "perldoc -q array.pdf". pod2pdf seems to call PDF::API2 about 10,000 times per page, so it is slow on large documents.

    function perlpdf() {
        P=perldoc
        X="$P $@"
        $P -uT "$@" | pod2pdf --title="$X" --footer-text="$X" > "/tmp/$X.pdf"
        open "/tmp/$X.pdf"
    }
    Type perlpdx instead of perlpdf!

    This one runs perldoc's raw POD through pod2man, man -t and Ghostscript's ps2pdf to produce a plainer-looking serif font PDF with a more generic title and footer (Perl version/date/etc.) and a filename. It looks more involved, but the process is extremely fast.

    function perlpdx() {
        P=perldoc
        $P -uT "$@" | pod2man > "/tmp/$P $*.1"
        man -t "/tmp/$P $*.1" | ps2pdf - "/tmp/$P $*.pdf"
        open "/tmp/$P $*.pdf"
    }
Safely read/write a file simultaneously
1 direct reply — Read more / Contribute
by golux
on May 16, 2018 at 20:40
    I'm working on a Client/Server project where a "progress file" gets updated (potentially quite often) and read from a separate (CGI) process. I was pretty sure that having the writer reopen the progress file could cause the reader occasional problems, and fairly sure that writing to a tempfile instead (and then moving the tempfile over the progress file) would be much safer (i.e. atomic).

    But why Google it when you can write code to test it instead? ;-)

    Here's the result, which indicates I was correct on both counts, and happily the latter seems to be atomic enough that an error never occurs. Set or clear the value of $unsafe to try the different algorithms.

    #!/usr/bin/perl

    ###############
    ## Libraries ##
    ###############

    use strict;
    use warnings;
    use File::Copy;
    use Function::Parameters;
    use IO::File;

    ##################
    ## User-defined ##
    ##################

    my $file   = 'file.txt';
    my $rdelay = 0.03;    # Read delay:  3/100th of a second
    my $wdelay = 0.01;    # Write delay: 1/100th of a second
    my $unsafe = 1;       # Set to zero to call the "safe" write algorithm

    ##################
    ## Main Program ##
    ##################

    $| = 1;

    if (fork) {
        writer($unsafe);
    } else {
        reader();
    }

    #################
    ## Subroutines ##
    #################

    #
    # Writes to a file many times per second.
    #
    fun writer($unsafe) {
        my $count = 0;
        while (1) {
            if ($unsafe) {
                writefile1($file, $count++);
            } else {
                writefile2($file, $count++);
            }

            # Sleep for 1/100th of a second
            select(undef, undef, undef, $wdelay);
        }
    }

    #
    # Reads from the file, displaying number of total errors
    # (each time the $count was undefined).
    #
    fun reader() {
        sleep 1;           # Give the writer time to create the file initially
        my $nerrs = 0;     # How many total errors did we get?
        while (1) {
            my $count = readfile($file);
            if ($count) {
                printf "%8d, ", $count;
            } else {
                printf "\nTotal errors = %d\n", ++$nerrs;
                sleep 1;
            }

            # Sleep for $rdelay seconds between reads
            select(undef, undef, undef, $rdelay);
        }
    }

    #
    # Algorithm 1
    #
    # Writes the $value directly to the file.
    # This turns out to be quite prone to error when the file is read.
    #
    fun writefile1($file, $value) {
        my $fh = IO::File->new;
        open($fh, '>', $file) or die "Can't write '$file' ($!)\n";
        print $fh "$value\n";
        close($fh);
    }

    #
    # Algorithm 2
    #
    # Writes the $value to a temp file, then moves the tempfile over the
    # actual destination. This turns out to be quite safe for reading.
    #
    fun writefile2($file, $value) {
        my $fh  = IO::File->new;
        my $tmp = 'tmp.txt';
        open($fh, '>', $tmp) or die "Can't write '$tmp' ($!)\n";
        print $fh "$value\n";
        close($fh);
        move($tmp, $file);
    }

    #
    # Reads the $value from the $file
    #
    fun readfile($file) {
        my $fh = IO::File->new;
        open($fh, '<', $file) or die "Can't read '$file' ($!)\n";
        my $value = <$fh>;
        defined($value) or return 0;
        chomp($value);
        close($fh);
        return $value;
    }
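    One footnote on the "safe" variant: File::Copy's move only amounts to an atomic rename when the tempfile and the target live on the same filesystem; across filesystems it falls back to copy-plus-unlink, which reopens the race. A minimal sketch (writefile_atomic is a hypothetical helper, not part of the post's code) that keeps the tempfile next to the target and calls the rename builtin directly:

    sub writefile_atomic {
        my ($file, $value) = @_;
        my $tmp = "$file.tmp.$$";    # same directory, therefore same filesystem
        open(my $fh, '>', $tmp) or die "Can't write '$tmp' ($!)\n";
        print {$fh} "$value\n";
        close($fh)          or die "Can't close '$tmp' ($!)\n";
        rename($tmp, $file) or die "Can't rename '$tmp' to '$file' ($!)\n";
    }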
    say  substr+lc crypt(qw $i3 SI$),4,5
Conway's Game of Life in PDL
1 direct reply — Read more / Contribute
by mxb
on May 16, 2018 at 11:31

    Edit: Apparently this is in the PDL Documentation, as an example. Whoops! Still, it was a good learning exercise :)

    Rather than a ported numpy tutorial, this is a self-developed implementation of Conway's Game of Life written in Perl/PDL. Hopefully people find this interesting, as I feel it shows how concise PDL code can be.

    The code is fairly straightforward. There is a single function conway() which accepts a single argument, the game arena. This is a two-dimensional PDL matrix. Alive cells are represented by a one, dead ones by zero. The conway() function sums the value of each cell together with the values of its eight neighbours into a temporary variable $tmp. It then applies the rules of the game, which are:

    • Any live cell with fewer than two live neighbors dies, as if caused by underpopulation.
    • Any live cell with two or three live neighbors lives on to the next generation.
    • Any live cell with more than three live neighbors dies, as if by overpopulation.
    • Any dead cell with exactly three live neighbors becomes a live cell, as if by reproduction.

    Because $tmp includes the cell itself, a cell is alive in the next generation when $tmp equals 3 (a live cell with two live neighbours, or a dead cell with three), or when $tmp equals 4 and the cell is currently alive (a live cell with three live neighbours). This is implemented as an elementwise and combined with an elementwise or.

    The main loop of the game sits in the body of the code; it simply displays the generation number and the game arena, then awaits input.

    The game arena is initialised with a 'glider', but feel free to experiment. As PDL wraps around by default, the surface is that of a torus.

    Enter a blank line for the next generation, anything else to exit

    Enjoy

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use 5.016;

    use PDL;

    sub conway {
        my $pdl = shift;
        die "Not 2D piddle" unless $pdl->ndims == 2;

        # Add up all values:
        my $tmp =
            $pdl +                                                # original
            $pdl->transpose->rotate(-1)->transpose +              # north
            $pdl->transpose->rotate(-1)->transpose->rotate(-1) +  # northeast
            $pdl->rotate(-1) +                                    # east
            $pdl->transpose->rotate(1)->transpose->rotate(-1) +   # southeast
            $pdl->transpose->rotate(1)->transpose +               # south
            $pdl->transpose->rotate(1)->transpose->rotate(1) +    # southwest
            $pdl->rotate(1) +                                     # west
            $pdl->transpose->rotate(-1)->transpose->rotate(1);    # northwest

        # Cell is alive if it's either:
        return ( $tmp == 4 & $pdl == 1 ) |    # Alive +3 neighbors
               $tmp == 3;                     # Alive +2 neighbors or dead +3 neighbors
    }

    my $arena = pdl(
        byte,
        [
            [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
            [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ],
        ]
    );

    my $gen = 0;
    while (1) {
        print "Generation: $gen (press enter for next)\n";
        print $arena;
        $arena = conway($arena);
        $gen++;
        exit if <STDIN> ne "\n";
    }
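    As a small variation (my addition, not part of the original post), the fixed glider can be swapped for a random "soup" by thresholding a PDL random piddle into a byte mask of live cells:

    # Hypothetical alternative arena: 16x8, with roughly 30% of cells alive.
    my $arena = ( random( 16, 8 ) > 0.7 )->byte;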
Basic Neural Network in PDL
1 direct reply — Read more / Contribute
by mxb
on May 15, 2018 at 07:37

    As part of my ongoing quest to port tutorials from Python/numpy to Perl/PDL please graciously accept the following contribution to the Monastery.

    This is the Perl/PDL port of A Neural Network in 11 Lines of Python. While I've added some documentation, please reference the original blog post for full details.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use 5.016;

    use PDL;

    ######################################################################
    # This example is ported from the tutorial at
    # https://iamtrask.github.io/2015/07/12/basic-python-network/
    ######################################################################
    #
    # In this example, we are training a neural network of two layers
    # (one set of weights).
    # It has the following variables:
    #   $X    - input neurons
    #   $y    - desired output values
    #   $syn0 - single layer of weights
    #   $l1   - output neurons

    # This is our 'non-linear' function. It accepts two arguments.
    # The first argument is a piddle of values, and the second argument
    # is a flag.
    #
    # If the flag is unset, the function returns the elementwise Sigmoid
    # Function (https://en.wikipedia.org/wiki/Sigmoid_function).
    #
    # If the flag is set, the function returns the elementwise derivative
    # of the Sigmoid Function.
    sub nonlin {
        my ( $x, $deriv ) = @_;
        return $x * ( 1 - $x ) if defined $deriv;
        return 1 / ( 1 + exp( -$x ) );
    }

    # $X holds our input values. It contains four examples of three
    # inputs. It is the following matrix:
    #
    # [
    #  [0 0 1]
    #  [0 1 1]
    #  [1 0 1]
    #  [1 1 1]
    # ]
    my $X = pdl( [ [ 0, 0, 1 ], [ 0, 1, 1 ], [ 1, 0, 1 ], [ 1, 1, 1 ] ] );

    # $y is the output vector. It is the following desired outputs for
    # the four input vectors above:
    # [0 0 1 1]
    my $y = pdl( [ 0, 0, 1, 1 ] )->transpose;

    # $syn0 is the first layer of weights, connecting the input values
    # ($X) to our first layer ($l1). It is initialised to random values
    # between -1 and 1.
    my $syn0 = ( ( 2 * random( 3, 1 ) ) - 1 )->transpose;

    # $l1 is the second (output) layer:
    my $l1;

    # This is the training loop. It performs 10000 training iterations.
    for ( 0 .. 10000 ) {

        # Predict the outputs for all four examples (full batch training).
        # This is performed by applying the non-linear function
        # elementwise over the dot product of our input examples matrix
        # ($X) and our weights between layers 0 (input) and 1 (output)
        # ($syn0):
        $l1 = nonlin( $X x $syn0 );

        # Calculate the error by comparing calculated values ($l1) to
        # known output values ($y)
        my $l1_error = $y - $l1;

        # Calculate the 'error weighted derivative'. This is the
        # elementwise product of the errors and the derivative of the
        # non-linear function across the outputs
        my $l1_delta = $l1_error * nonlin( $l1, 1 );

        # Update the weights between the layers
        $syn0 += ( $X->transpose x $l1_delta );
    }

    # Display output
    say "Expected output:", $y;
    say "Output After Training:", $l1;

    Running it on my machine takes approximately 1.5 seconds and gives output similar to:

    % perl nn_tutorial.pl
    Expected output:
    [
     [0]
     [0]
     [1]
     [1]
    ]
    Output After Training:
    [
     [0.0096660515]
     [0.0078649669]
     [  0.99358927]
     [  0.99211856]
    ]
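    A small follow-up check (my addition, not part of the ported tutorial): once $syn0 is trained, the same forward pass predicts an output for an input row that was not trained on above; the network has effectively learned to echo the first input column.

    # Hypothetical extra check after training: predict for the input [1, 0, 0].
    my $new_input  = pdl( [ [ 1, 0, 0 ] ] );
    my $prediction = nonlin( $new_input x $syn0 );
    say "Prediction for [1 0 0]:", $prediction;    # expected to be close to 1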
Locate Survey Markers (US-Only, uses USGS WebService)
No replies — Read more | Post response
by roboticus
on May 04, 2018 at 18:38

    Hello, all--

    I had a friend who wanted to find some survey markers near his house, so I looked around and found a webservice that would locate some survey markers in an area. Since I coded it up, I thought I'd publish it here, in case there's anyone else who might like to try it. It's rough (as it's a one-off), but should be easy enough to modify.

    As is often the case, all the heavy lifting is done by some handy CPAN modules (JSON, HTTP::Request, LWP::UserAgent and Math::Trig). I was especially pleased to find Math::Trig--I was trying to derive it myself and needed arc-cosine. When I found Math::Trig had acos and looked over the docs, I found that it already had all the great-circle math as well!
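    For anyone who hasn't met those helpers, here is a tiny standalone sketch (adapted from the Math::Trig documentation, with London and Tokyo as arbitrary example points) using the same NESW convention as the script below:

    use Math::Trig qw( great_circle_distance deg2rad );

    # Math::Trig wants (theta, phi) angles: theta from the longitude,
    # phi measured down from the North Pole (hence 90 - latitude).
    sub NESW { deg2rad( $_[0] ), deg2rad( 90 - $_[1] ) }

    my @London = NESW(  -0.5, 51.3 );
    my @Tokyo  = NESW( 139.8, 35.7 );

    # Pass the Earth's radius in miles to get the distance in miles.
    printf "%.0f miles\n", great_circle_distance( @London, @Tokyo, 3958.8 );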

    Anyway, I hope someone finds it useful...

    #!env perl
    #
    # websvc_usgs_fetch_bounding_box.pl <LAT> <LON> <dist>
    #
    # Use the USGS "Bounding Box Service" to find survey markers within the
    # rough rectangle whose sides are <dist> miles from the specified latitude
    # and longitude.
    #
    # 20180504 original version
    #
    use strict;
    use warnings;

    use Data::Dump 'pp';
    use HTTP::Request;
    use JSON;
    use LWP::UserAgent;
    use Math::Trig qw( :great_circle deg2rad );

    my $LAT = shift;
    my $LON = shift;
    my $center_dec = [ $LON, $LAT ];
    my $squaradius_mi = shift or die <<EOMSG;
    Expected: perl websvc_usgs_fetch_bounding_box.pl LAT LON RAD
        LAT - latitude like 38.1234,
        LON - longitude like -78.1234,
        RAD - radius in miles (actually roughly a rectangle rather than circle)
    EOMSG

    my $Re_mi = 3958.8;   # radius of earth in miles

    # Figure out approximately how long a degree is in both the longitudinal and
    # latitudinal directions.
    my $mi_per_degree = miles_per_degree([ NESW($LON, $LAT) ]);

    # Now find the (min/max) * (lat,lon) for the bounding rectangle to search
    # for survey markers
    my $deg_per_mi_lat = 1 / $mi_per_degree->[1];
    my $deg_per_mi_lon = 1 / $mi_per_degree->[0];
    my $min_lat = $center_dec->[1] - $squaradius_mi*$deg_per_mi_lat;
    my $max_lat = $center_dec->[1] + $squaradius_mi*$deg_per_mi_lat;
    my $min_lon = $center_dec->[0] - $squaradius_mi*$deg_per_mi_lat;
    my $max_lon = $center_dec->[0] + $squaradius_mi*$deg_per_mi_lat;

    # We'll use the DDMMSS format for the lat/lon
    $min_lat = dec_to_DDMMSS($min_lat, "N", "S");
    $max_lat = dec_to_DDMMSS($max_lat, "N", "S");
    $min_lon = dec_to_DDMMSS($min_lon, "E", "W");
    $max_lon = dec_to_DDMMSS($max_lon, "E", "W");

    # Now fetch the data, and print the results
    my $URL = "http://geodesy.noaa.gov/api/nde/bounds?"
             ."minlat=$min_lat&maxlat=$max_lat"
             ."&minlon=$min_lon&maxlon=$max_lon";
    my $request = HTTP::Request->new(GET=>$URL,
        [ 'Content-Type'=>'application/json; charset=UTF-8' ]);
    my $ua = LWP::UserAgent->new;
    my $response = $ua->request($request);
    if (! exists $response->{_content}) {
        # crappy error detection/handling but it meets my current needs
        print pp($response), "\n";
        print "***** expected response->{content}!!!!!!!!\n";
    }
    my $r = decode_json($response->{_content});

    print "Found ", scalar(@$r), " markers within $squaradius_mi miles centered around ",
        pp($center_dec), "\n";
    print <<EOHDR;
    LATITUDE   LONGITUDE  NAME                 PID
    ---------- ---------- -------------------- ----------
    EOHDR

    for my $hr (@$r) {
        printf "%-10s %-10s %-20s %-10s\n",
            $hr->{lat}, $hr->{lon}, $hr->{name}, $hr->{pid};
    }

    sub dec_to_DDMMSS {
        my ($dec, $dir_pos, $dir_neg) = @_;

        # Unfortunately, they use 2 sig digs for N/S and 3 for E/W
        my $fmt = $dir_pos eq 'N' ? "%s%02d%02d%02d.%03d" : "%s%03d%02d%02d.%03d";
        my $dir = $dir_pos;
        if ($dec < 0) {
            $dec = -$dec;
            $dir = $dir_neg;
        }
        my ($deg, $min, $sec, $sfrac) = (int($dec), 0, 0, 0);
        $dec = 60 * ($dec - $deg);
        $min = int($dec);
        $dec = 60 * ($dec - $min);
        $sec = int($dec);
        $sfrac = int(1000*($dec-$sec));
        return sprintf $fmt, $dir, $deg, $min, $sec, $sfrac;
    }

    sub dms_to_DDMMSS {
        my $dms = shift;
        my ($deg, $dir, $min, $sec, $sfrac);
        if ($dms =~ /^\s*(\d+)\s*([NEWS])\s*(\d+)\s*'\s*([\d\.]+)\s*"\s*$/) {
            ($deg, $dir, $min, $sec, $sfrac) = ($1, $2, $3, $4, 0);
            if ($sec =~ /(\d+)\.(\d+)/) {
                ($sec,$sfrac) = ($1,$2);
            }
        }
        else {
            die "Unexpected format <$dms>!";
        }

        # Build the return value: For N/S use <xDDMMSS.s*>, for E/W use <xDDDMMSS.s*>
        if ($dir eq "N" or $dir eq "S") {
            return sprintf "%s%02d%02d%02d.%s", $dir, $deg, $min, $sec, $sfrac;
        }
        else {
            return sprintf "%s%03d%02d%02d.%s", $dir, $deg, $min, $sec, $sfrac;
        }
    }

    sub NESW { deg2rad($_[0]), deg2rad(90-$_[1]) }

    sub dms_to_dec {
        my $dms = shift;
        my ($deg, $dir, $min, $sec, $sfrac);
        if ($dms =~ /^\s*(\d+)\s*([NEWS])\s*(\d+)\s*'\s*([\d\.]+)\s*"\s*$/) {
            ($deg, $dir, $min, $sec, $sfrac) = ($1, $2, $3, $4, 0);
            if ($sec =~ /(\d+)\.(\d+)/) {
                ($sec,$sfrac) = ($1,$2);
            }
        }
        else {
            die "Unexpected format <$dms>!";
        }
        $dir = ($dir eq 'N' or $dir eq 'E') ? 1 : -1;
        return $dir * ($deg + $min/60.0 + (0 + ("$sec.$sfrac"))/3600.0);
    }

    # compute number of miles per degree at the specified lat/lon
    sub miles_per_degree {
        my $news = shift;

        my $news_1lat = [ @$news ];
        $news_1lat->[1] += deg2rad(1.0);

        my $news_1lon = [ @$news ];
        $news_1lon->[0] += deg2rad(1.0);

        my $dLat_km = great_circle_distance(@$news, @$news_1lat, $Re_mi);
        my $dLon_km = great_circle_distance(@$news, @$news_1lon, $Re_mi);
        return [ $dLat_km, $dLon_km ];
    }

    A sample run gives me:

    $ perl websvc_usgs_fetch_bounding_box.pl 34.1234 -78.1234 1.5
    Found 11 markers within 1.5 miles centered around [-78.1234, 34.1234]
    LATITUDE   LONGITUDE  NAME                 PID
    ---------- ---------- -------------------- ----------
    34.14829   -78.09827  WINNABOW RM 4        EB0192
    34.14832   -78.09805  WINNABOW             EB0189
    34.14856   -78.09788  WINNABOW RM 3        EB0191
    34.09944   -78.12361  P 117                EB0198
    34.11095   -78.11490  FLOWERS              EB2124
    34.11178   -78.11387  Q 117                EB0197
    34.11812   -78.10776  BECK                 EB2125
    34.12278   -78.10611  R 117                EB0195
    34.12708   -78.10360  CAMPBELL             EB2108
    34.14593   -78.10423  WINNABOW AZ MK 2     EB1389
    34.14722   -78.11694  N 235                EB0300

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

Stone Jam -- 21 challenging puzzles with perl Tk
3 direct replies — Read more / Contribute
by Discipulus
on May 03, 2018 at 04:00
    Hello nuns and monks!

    April was the month dedicated to Perl/Tk here at the monastery (grin grin..) and during this time I developed another fun Tk puzzle. I have some unanswered questions, which I put here even though the game is perfectly functional.

    I took the inspiration from one of my daughter's puzzle games and suddenly had the idea to translate it into Perl.

    The program presents 21 different puzzles where you have to move a red stone up to the border of the board, while the other stones have to be shifted out of the way to free the path.

    The challenge, for me, was to reproduce realistic moves on the canvas: I was looking for a way to make brick-like pieces that cannot collide with each other. I got nowhere until I focused on the free tiles instead, recalculating the possible moves for all stones every time one of them is moved. CPU cycles are so cheap nowadays..

    • Is there a better way to achieve the above? For my own sanity I abstracted the board, avoiding direct coordinate calculations (a rough sketch of the idea is below).
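    A rough sketch of that "free tiles" idea (a hypothetical illustration, not code from Stone Jam, simplified to single-cell stones): the board is a hash of occupied cells, and after every move the legal destinations for a stone are recomputed as its free neighbouring cells, so collisions are impossible by construction.

    use strict;
    use warnings;

    my @board    = ( 4, 4 );                              # board width, height
    my %occupied = ( '1,1' => 'red', '2,1' => 'grey' );   # "col,row" => stone id

    # Any neighbouring cell that is on the board and not occupied
    # is a collision-free destination.
    sub free_moves_for {
        my ( $col, $row ) = @_;
        my @moves;
        for my $delta ( [ 1, 0 ], [ -1, 0 ], [ 0, 1 ], [ 0, -1 ] ) {
            my ( $c, $r ) = ( $col + $delta->[0], $row + $delta->[1] );
            next if $c < 0 or $r < 0 or $c >= $board[0] or $r >= $board[1];
            push @moves, [ $c, $r ] unless $occupied{"$c,$r"};
        }
        return @moves;
    }

    printf "red can move to: %s\n",
        join ' ', map { "($_->[0],$_->[1])" } free_moves_for( 1, 1 );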
    About the graphics: I planned to use some more advanced techniques for this game, like textures or image tiles applied to the stones, but I had absolutely no luck with that and ended up with a Mondrian-like colour scheme (with, as always, a high-contrast option: many people have problems with colours..)
    • What happened to the -tile option for the canvas? How can I fill canvas objects (rectangles, ovals..) with Photo images or more advanced graphics?

    About the code: I realized the free-space approach too late, and the move calculation was reworked along the way, so the resulting code can probably be shortened a lot. Anyway, the code is complete and runs fine, and some of the puzzles are really challenging.

    Have fun!

    PS: added to my GitHub repos as Stone-Jam; I'd like issues and any reports to be directed there.

    L*

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
Repeatedly edit a file hacking PPI::Cache
No replies — Read more | Post response
by Discipulus
on Apr 13, 2018 at 07:34
    Hello monks and nuns,

    this is a quick hack on PPI::Cache and PPI itself. The program looks for a file named last_hex_id_file.sto, which contains the last hex_id generated for the Perl document. If the file is not found (as is normal the first time you run the program) it asks for the path of a Perl document to parse, then generates the cache and stores the hex_id in the above file.

    When a new file or some cache content is loaded, it asks for a PPI class to iterate over: each element is printed out and you are asked if you want to modify it.

    At the end of the cycle the new document is put back into the cache and you are asked for an optional output file. The next time you run the program, the newer version is loaded from the cache automatically; because of this, run it in a new folder for each Perl document you want to modify.

    use strict;
    use warnings;
    use PPI;
    use PPI::Cache;
    use Term::ReadLine;
    use Storable qw(nstore retrieve);

    my $term             = Term::ReadLine->new('PPI cache hack');
    my $last_hex_id_file = 'last_hex_id_file.sto';
    my $perl_doc;
    my $cache;

    # not found the last_hex_id_file.sto file: ask for a new perl document to parse
    unless (-e $last_hex_id_file){
        print "cache file $last_hex_id_file not found.\n".
              "Insert the full path of a new file to edit and press enter (or CTRL-C to terminate)\n";
        my $path = $term->readline('FILE PATH:');
        die "Some problem with [$path]! " unless -e -r -f -s $path;
        my $doc = PPI::Document->new($path) or die "Unable to load $path via PPI! ";
        $cache = PPI::Cache->new( path => './', readonly => 0 );

        # store the original in the cache
        $cache->store_document($doc) or die "Unable to store into the cache!";

        # get a copy to work with from the cache
        $perl_doc = $cache->get_document($doc->hex_id);
        print "loading from cache ok\n" if ref $perl_doc eq 'PPI::Document';

        #store_hex($doc->hex_id);
        nstore (\$doc->hex_id, $last_hex_id_file);
    }

    # last_hex_id_file.sto is here: load from it
    my $last_hex = retrieve($last_hex_id_file);
    print "'last_hex_id_file.sto' succesfully read: using $$last_hex\n";
    $cache = PPI::Cache->new( path => './', readonly => 0 ) unless ref $cache eq 'PPI::Cache';
    $perl_doc = $cache->get_document( $$last_hex );

    print "Which PPI class do you want to edit?\n";
    my $class = $term->readline('PPI CLASS:');

    print "\n\nEach element of the type $class will be proposed for edit (the content).\n".
          "insert your new input terminating it with CTRL-Z on a empty line.\n".
          "use a bare ENTER to skip the current element\n\n";

    foreach my $it ( @{$perl_doc->find($class)} ) {
        print "STATEMENT: ", $it->statement, "\n",
              "CONTENT:   ", $it->content, "\n\n";
        my @in;
        while ( my $line = $term->readline('EDIT:') ){
            push @in, $line;
        }
        if (@in){
            $it->set_content(join "\n", @in);
        }
    }

    # store in the cache
    $cache->store_document($perl_doc);
    print "storing cache hex_id: ", $perl_doc->hex_id, " in $last_hex_id_file\n";
    nstore (\$perl_doc->hex_id, $last_hex_id_file);

    # ask for an eventual output file
    print "Enter a filename if you want to save the current version (or ENTER to skip)\n";
    my $out = $term->readline('OUTPUT FILE:');
    $perl_doc->save( $out ) if $out;
    L*

    PS: if you pass PPI::Token::Quote to the above program, you can use it to translate a program's strings into another language with ease.
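    A minimal standalone sketch of that idea (my illustration, not part of the program above): find every quoted string in a document and look at its literal value before rewriting it.

    use strict;
    use warnings;
    use PPI;

    # Hypothetical example document; any Perl source file would do.
    my $doc = PPI::Document->new( \'print "Hello, world\n"; my $x = q(bye);' );

    for my $quote ( @{ $doc->find('PPI::Token::Quote') || [] } ) {
        # ->string returns the literal value without the surrounding quotes.
        printf "found %s => %s\n", ref $quote, $quote->string;
    }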

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
perl6 Array of hashes AoH
No replies — Read more | Post response
by teun-arno
on Mar 19, 2018 at 15:26

    Perl 5 has arrays of hashes (AoH); this can be done in Perl 6 also:

    D:\perl6.scripts\myown>perl6 -v
    This is Rakudo Star version 2018.01 built on MoarVM version 2018.01
    implementing Perl 6.c.

    I am on Windows 10.

    use v6;

    # create 3 hashes, showing mixed inits that can be used in perl6 :
    my %hsh0 = ( "Name" => "George H. W. Bush", "Function" => <president of USA> , 'Time' => <2001-2009>);
    my %hsh1 = ( "Name" => "Bill Clinton", "Function" => <president of USA> , 'Time' => <1993-2001>) ;
    my %hsh2 = ( "Name" , "Barack Obama", "Function" , <President of USA> , 'Time' , <2009-2017> ) ;

    # Subscript Adverbs : :exists :k :v :p :delete
    #if ( %hsh1<Name>:exists) {    # Use this when keys have No Spaces !!
    my $key = 'Name' ;
    if ( %hsh1{$key}:k) {          # Can I use %hsh1{Name}:exists : Yes works!
        say "Found Name element in %hsh1";
    }

    # there is an other use for <> construct : , but {} also works!!
    say %hsh1{}:v.perl;     # Show all values
    say %hsh1.pairs.perl;

    # push the several hashes onto the array : @arr
    my @arr ;

    # Please notice the : after the push
    @arr.push: { %hsh0 } ;
    @arr.push: { %hsh1 } ;
    @arr.push: { %hsh2 } ;

    # show some entries, using several formats that perl6 has :
    say '@arr.[0].{"Name"} = ' ~ @arr.[0].{'Name'};
    say '@arr.[1].<Name> = ' ~ @arr.[1].<Name> ;
    say @arr.end.fmt('%+4d') ;    # How many items ( hashes ) are in the array.

    # dump @arr
    dd @arr;

    # try looping over the @arr , and detect the keys stored in the hash.
    # It's more simple than I thought ( after some experimenting )
    for 0 .. @arr.end -> $idx {
        say "@arr idx : $idx";
        my %x = @arr[$idx];
        for %x.kv -> $key, $value {
            printf "%10.10s : %-20.20s\n" , $key, $value;    # please notice printf NOT using () ;
        }
    }

    Output

    D:\perl6.scripts\myown>perl6 array_hash_3.p6
    Found Name element in %hsh1
    ($("president", "of", "USA"), "1993-2001", "Bill Clinton")
    (:Function($("president", "of", "USA")), :Time("1993-2001"), :Name("Bill Clinton")).Seq
    @arr.[0].{"Name"} = George H. W. Bush
    @arr.[1].<Name> = Bill Clinton
      +2
    Array @arr = [{:Function($("president", "of", "USA")), :Name("George H. W. Bush"), :Time("2001-2009")}, {:Function($("president", "of", "USA")), :Name("Bill Clinton"), :Time("1993-2001")}, {:Function($("President", "of", "USA")), :Name("Barack Obama"), :Time("2009-2017")}]
    @arr idx : 0
      Function : president of USA
          Time : 2001-2009
          Name : George H. W. Bush
    @arr idx : 1
      Function : president of USA
          Time : 1993-2001
          Name : Bill Clinton
    @arr idx : 2
      Function : President of USA
          Time : 2009-2017
          Name : Barack Obama

    Could not find any useful examples on this subject; this is what I came up with.
    Hope it's useful for somebody else.

     have fun with Perl6
perl6 matrix: array of arrays
2 direct replies — Read more / Contribute
by teun-arno
on Mar 19, 2018 at 15:03

    Started with perl6 recently.. perl5 has AoA ... wanted to know if the same can be done in perl6...
    It seems that it can be done using perl6 :

    C:\WINDOWS\system32>perl6 -v
    This is Rakudo Star version 2018.01 built on MoarVM version 2018.01
    implementing Perl 6.c.

    So here is the code, which works under Windows 10:

    use v6;

    my @arr = [
        [ 1.1,  2.2,  3.3,  4.4,  5.5  ],
        [ 10,   20,   30,   40,   50   ],
        [ 100,  200,  300,  400,  500  ],
        [ 1000, 2000, 3000, 4000, 5000 ],
    ];

    dd @arr;    # dump the matrix

    loop ( my $row = 0; $row <= @arr.end; $row++ ) {
        #say "Idx : $row";
        loop ( my $col = 0; $col <= @arr[$row].end; $col++ ) {
            print "@arr[$row][$col].fmt("%7.1f")\t";
        }
        print "\n";
    }

    # It's a matrix, so just take one row to find out the number of columns
    # ( cannot use @arr[0].elems : gives an error )
    my $aant_cols = ( @arr[0].end ) ;

    print "=======\t" x $aant_cols + 1 , "\n";

    loop ( my $col = 0; $col <= $aant_cols; $col++ ) {
        printf "%7.1f" , [+] @arr[*;$col] ;    # calculate the total for each column
        print "\t"
    }
    say "";

    The above creates the following result

    Array @arr = [1.1, 2.2, 3.3, 4.4, 5.5, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000]
        1.1     2.2     3.3     4.4     5.5
       10.0    20.0    30.0    40.0    50.0
      100.0   200.0   300.0   400.0   500.0
     1000.0  2000.0  3000.0  4000.0  5000.0
    ======= ======= ======= ======= =======
     1111.1  2222.2  3333.3  4444.4  5555.5

    Please notice: all columns are totalled (like =SUM in Excel).

    Could not find any useful examples in the perl6 documentation, so I worked it out myself.
    Hope it's of use for somebody else.

    Have fun with perl6
My first cpan module - App::ForKids::LogicalPuzzleGenerator
3 direct replies — Read more / Contribute
by pawel.biernacki
on Feb 23, 2018 at 14:06

    Hi! I would like to introduce my module - my first contribution to www.cpan.org. It generates logic puzzles. Get it from https://metacpan.org/release/App-ForKids-LogicalPuzzleGenerator. You can use it as follows:

    use App::ForKids::LogicalPuzzleGenerator;

    my $x = App::ForKids::LogicalPuzzleGenerator->new(
        range                       => 3,
        amount_of_facts_per_session => 4,
    );

    print $$x{intro_story};
    print $$x{story};
    print $$x{solution_story};

    It makes heavy use of AI::Prolog. An example of such a puzzle is below:

    John,Patrick and James live here. Each has a different favourite fruit (pinapples,apples,pears). Each has a different profession (fisherman,blacksmith,witcher).

    - My name is John. The one who likes apples is not a blacksmith. Patrick is not a witcher. James does not like pinapples. James is not a fisherman.

    - My name is James. John does not like pears. Patrick does not like apples. I don't like apples. The one who likes apples is not a fisherman.

    John likes apples. John is a witcher. Patrick likes pinapples. Patrick is a fisherman. James likes pears. James is a blacksmith.

    Pawel Biernacki
Clean Up Empty Directories
4 direct replies — Read more / Contribute
by GotToBTru
on Feb 16, 2018 at 16:50
    The code somebody else wrote cleans out old files, but leaves the directories behind. This cleans up the directories.

    #!/usr/bin/perl
    use strict;
    use warnings;

    chomp(my @list = `du -kh /mnt/edi/si51/documents`);

    my $dltd = 0;
    foreach my $line (@list) {
        my ($size, $path) = split /\t/, $line;
        $size =~ s/\D//g;
        if ($size == 0) {
            rmdir $path && $dltd++;
        }
    }
    printf "%d directories deleted.\n", $dltd;

    UPDATE: There are several things that were in an earlier version of this script that didn't make the second cut, but only because I got lazy. My original got deleted somehow, and I had foolishly not kept a copy, so I wrote the above quickly.

    The directory structure is documents/4digityear/abbreviatedcardinalmonth/2digitday/hour/minute. At first I restricted deletions to directories above some number of days old, but rmdir updates the directory time information, meaning a directory that was now empty because all its empty constituent directories were gone looked like it was brand new. This made it useless to run consecutively. I came up with a calculation that used the directory tree to come up with the age, and that worked. I just didn't bother with it when I rewrote the script this time. Some of the alternate solutions don't have that limitation.
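    A hedged alternative sketch (not part of the original script): File::Find's finddepth visits children before their parents, so a directory emptied during the pass is itself removed on the way back up, without parsing du output. The path below is the same tree the script above targets.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Find;

    my $top  = '/mnt/edi/si51/documents';
    my $dltd = 0;

    finddepth(
        {
            no_chdir => 1,    # keep full paths; don't chdir around the tree
            wanted   => sub {
                return unless -d $File::Find::name;
                return if $File::Find::name eq $top;    # keep the top-level directory
                rmdir $File::Find::name and $dltd++;    # rmdir quietly fails on non-empty dirs
            },
        },
        $top
    );

    printf "%d directories deleted.\n", $dltd;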

    But God demonstrates His own love toward us, in that while we were yet sinners, Christ died for us. Romans 5:8 (NASB)

MCE gather and relay demonstrations
No replies — Read more | Post response
by marioroy
on Feb 13, 2018 at 00:32

    Fellow Monks,

    I received a request from John Martel to process a large flat file and expand each record to many records based on splitting out items in field 4 delimited by semicolons. Each row in the output is given a unique ID starting with one while preserving output order.

    Thank you, John. This is a great use-case for MCE::Relay (2nd example).

    Input File -- Possibly larger than 500 GiB in size

    foo|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    bar|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    baz|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    ...

    Output File

    000000000000001|item1|foo|field2|field3|field5|field6|field7
    000000000000002|item2|foo|field2|field3|field5|field6|field7
    000000000000003|item3|foo|field2|field3|field5|field6|field7
    000000000000004|item4|foo|field2|field3|field5|field6|field7
    000000000000005|itemN|foo|field2|field3|field5|field6|field7
    000000000000006|item1|bar|field2|field3|field5|field6|field7
    000000000000007|item2|bar|field2|field3|field5|field6|field7
    000000000000008|item3|bar|field2|field3|field5|field6|field7
    000000000000009|item4|bar|field2|field3|field5|field6|field7
    000000000000010|itemN|bar|field2|field3|field5|field6|field7
    000000000000011|item1|baz|field2|field3|field5|field6|field7
    000000000000012|item2|baz|field2|field3|field5|field6|field7
    000000000000013|item3|baz|field2|field3|field5|field6|field7
    000000000000014|item4|baz|field2|field3|field5|field6|field7
    000000000000015|itemN|baz|field2|field3|field5|field6|field7
    ...

    Example One

    This example configures a custom function for preserving output order. Unfortunately, the sprintf function alone involves extra CPU time causing the manager process to fall behind. The workers may idle while waiting for the manager process to respond to the gather request.

    use strict;
    use warnings;

    use MCE::Loop;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    sub preserve_order {
        my ($fh) = @_;
        my ($order_id, $start_idx, $idx, %tmp) = (1, 1);

        return sub {
            my ($chunk_id, $aref) = @_;
            $tmp{ $chunk_id } = $aref;

            while ( my $aref = delete $tmp{ $order_id } ) {
                foreach my $line ( @{ $aref } ) {
                    $idx = sprintf "%015d", $start_idx++;
                    print $fh $idx, $line;
                }
                $order_id++;
            }
        }
    }

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 3,
        gather => preserve_order($fh_out)
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @buf;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g;
            chomp $line;

            my ($f1,$f2,$f3,$items,$f5,$f6,$f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @buf, "|$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        MCE->gather($chunk_id, \@buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Example Two

    To factor out sprintf from the manager process, another way is via MCE::Relay for incrementing the ID value. Workers obtain the current ID value and increment/relay for the next worker, ordered by chunk ID behind the scenes. Workers call sprintf in parallel. This allows the manager process (out_iter_fh) to accommodate up to 32 workers and not fall behind. It also depends on IO performance, of course.

    The MCE::Relay module is loaded automatically whenever the MCE init_relay option is specified.

    use strict;
    use warnings;

    use MCE::Loop;
    use MCE::Candy;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 8,
        gather => MCE::Candy::out_iter_fh($fh_out),
        init_relay => 1
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @lines;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g;
            chomp $line;

            my ($f1,$f2,$f3,$items,$f5,$f6,$f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @lines, "$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        my $idx = MCE::relay { $_ += scalar @lines };

        my $buf = '';
        foreach my $line ( @lines ) {
            $buf .= sprintf "%015d|%s", $idx++, $line;
        }

        MCE->gather($chunk_id, $buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Relay accounts for the worker handling the next chunk_id value. Therefore, do not call relay more than once inside the block. Doing so will cause IPC to stall.

    Regards, Mario

Easily back up all of your Github repositories and/or issues
No replies — Read more | Post response
by stevieb
on Feb 11, 2018 at 16:25

    It's been in the works at the lower-end of my priority list, but after having a bit of a bug-closing weekend, thought I'd tackle getting out an initial release of Github::Backup.

    The cloud is a great thing, until the sun evaporates it one way or another. Github, although fantastically reliable, is prone to issues just like any other site on the Internet. I'd go as far as to say that even they could be prone to data loss in very rare circumstances.

    This distribution, which provides a command-line binary, allows you to quickly and easily back up your repositories and issues to your local machine. The repositories are cloned so all data is retrieved as-is as legitimate Git repos, and the issues are fetched and stored as JSON data. Useful if there was ever a catastrophic issue at Github, or simply for offline perusal of your information.

    At a basic level, you need to send in your Github username, API token (see this), a directory to stash the data retrieved, and a flag to signify you want to back up either your repos, issues or both.

    github_backup \
        -u stevieb9 \
        -t 003e12e0780025889f8da286d89d144323c20c1ff7 \
        -d /home/steve/github_backup \
        -r \
        -i

    That'll back up both repos and issues. The structure of the backup directory is as follows:

    backup_dir/
        - issues/
            - repo1/
                - issue_id_x
                - issue_id_y
            - repo2/
                - issue_id_a
        - repo1/
            - repository data
        - repo2/
            - repository data

    Now, most don't like supplying keys/tokens/passwords on the command-line or within a script, so you can stash your Github API token into the GITHUB_TOKEN environment variable, and we'll fetch it from there instead:

    github_backup -u stevieb9 -d /home/steve/github_backup -r -i

    Full usage for the binary:

    Usage: github_backup -u username -t github_api_token -d /backup/directory -r -i

    Options:

    -u | --user     Your Github username
    -t | --token    Your Github API token
    -d | --dir      The backup directory
    -p | --proxy    Optional proxy (https://proxy.example.com:PORT)
    -r | --repos    Back up all of your repositories
    -i | --issues   Back up all of your issues
    -h | --help     Display this help page

    The API is very straightforward as well:

    use warnings;
    use strict;

    use Github::Backup;

    # token stashed in GITHUB_TOKEN env var
    my $gh = Github::Backup->new(
        api_user => 'stevieb9',
        dir      => '/home/steve/github_backup',
    );

    # back up all repos
    $gh->repos;

    # back up all issues
    $gh->issues;

    This is one distribution that I've released prior to being happy with my unit test regimen, so that's on the imminent to-do list. There are tests, but as always, there can never be enough. In this case, I myself am not even happy with them, so if you run into any issues, please open a ticket or reply back here.

    Going forward, I plan on adding functionality to independently back up *all* Github data for a user, not just repos and issues. I also plan to test restore operations, but that's not anything I'm considering short-term.

    Have fun!

    -stevieb

    Disclaimer: Also posted on my blog.

Shell (bash/zsh) completion for dzil
1 direct reply — Read more / Contribute
by tinita
on Feb 09, 2018 at 13:37
    Hi,
    I created shell completion scripts for dzil. The completion that is shipped with Dist::Zilla only completes subcommands, and is only for bash, as far as I can see.
    My scripts also complete options, and show the description of subcommands and options.
    See https://github.com/perlpunk/shell-completions.
    (If dzil commands change, I have to update this, too, of course.)
    I created this with https://metacpan.org/pod/App::AppSpec

    Usage:
    # bash
    $ git clone https://github.com/perlpunk/shell-completions.git
    $ cd shell-completions
    $ source bash/dzil.bash
    $ dzil <TAB>
    add         -- add modules to an existing dist
    authordeps  -- list your distributions author dependencies
    build       -- build your dist
    clean       -- clean up after build, test, or install
    commands    -- list the applications commands
    help        -- Show command help
    install     -- install your dist
    listdeps    -- print your distributions prerequisites
    new         -- mint a new dist
    nop         -- do nothing: initialize dzil, then exit
    release     -- release your dist
    run         -- run stuff in a dir where your dist is built
    setup       -- set up a basic global config file
    smoke       -- smoke your dist
    test        -- test your dist
    $ dzil test --<TAB>
    --all              -- enables the RELEASE_TESTING, AUTOMATED_TESTING, EXTENDED_TESTING and AUTHOR_TESTING env variables
    --author           -- enables the AUTHOR_TESTING env variable
    --automated        -- enables the AUTOMATED_TESTING env variable (default behavior)
    --extended         -- enables the EXTENDED_TESTING env variable
    --help             -- Show command help
    --jobs             -- number of parallel test jobs to run
    --keep-build-dir   -- keep the build directory even after a success
    --keep             -- keep the build directory even after a success
    --lib-inc          -- additional @INC dirs
    --release          -- enables the RELEASE_TESTING env variable
    --test-verbose     -- enables verbose testing (TEST_VERBOSE env variable on Makefile.PL, --verbose on Build.PL)
    --verbose          -- log additional output
    --verbose-plugin   -- log additional output from some plugins only

    # zsh
    # put zsh/_dzil into a directory which is read by
    # zsh completions, or add this directory to your .zshrc:
    # fpath=("$HOME/path/to/shell-completions/zsh" $fpath)
    # log in again
Short GitHub Markdown emojis
No replies — Read more | Post response
by reisinge
on Jan 24, 2018 at 07:57

    I wanted to use an emoji in a README file on GitHub. Since I plan to use it often I wanted to pick a short one:

    curl -sL https://goo.gl/jjtUpD | perl -nE '/(:[^:]+:)/ && length $1 <= 5 && say $1'
    Leave no stone unturned. -- Euripides
