http://www.perlmonks.org?node_id=1044

This section is the place to post your general code offerings -- everything from one-liners to full-blown frameworks and apps.

CUFPs
Exploring Type::Tiny Part 6: Some Interesting Type Libraries
by tobyink
on Jan 20, 2019 at 08:40

    Type::Tiny is probably best known as a way of having Moose-like type constraints in Moo, but it can be used for so much more. This is the sixth in a series of posts showing other things you can use Type::Tiny for. This article along with the earlier ones in the series can be found on my blog and in the Cool Uses for Perl section of PerlMonks.

    While Types::Standard provides all the type constraints Moose users will be familiar with (and a few more), there are other type libraries you can use instead of, or as well as, Types::Standard.

    Types::Path::Tiny

    If your attribute or parameter needs to accept a file or directory name, I'd strongly recommend using Types::Path::Tiny. It provides Path, File, and Dir types, plus Abs* versions of them which coerce given filenames into absolute paths. The Path::Tiny objects it coerces strings into provide a bunch of helpful methods for manipulating files.

    package MyApp::Config {
        use Moo;
        use Types::Path::Tiny qw(AbsFile);
        use JSON::MaybeXS qw(decode_json);

        has config_file => (
            is     => 'ro',
            isa    => AbsFile->where(q{ $_->basename =~ q/\.json$/ }),
            coerce => 1,
        );

        sub get_hash {
            my $self = shift;
            decode_json( $self->config_file->slurp_utf8 );
        }
    }

    Nice? Types::Path::Tiny is my personal favourite third-party type library. If you're writing an application that needs to deal with files, use it.
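
    For instance, constructing the class above with a relative filename (a hypothetical call; the behaviour follows from the AbsFile coercion) gives you an absolute Path::Tiny object:

    # Hypothetical usage of the MyApp::Config class above.
    # Assumes conf/app.json exists, since AbsFile only accepts existing files.
    my $config = MyApp::Config->new( config_file => 'conf/app.json' );
    print $config->config_file, "\n";   # absolute path, e.g. /home/me/project/conf/app.json
    my $settings = $config->get_hash;   # hashref decoded from the JSON file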

    Types::Common::String and Types::Common::Numeric

    Types::Common::String provides a bunch of type constraints more specific than the standard Str type. If you have indicated that an attribute or parameter should be a string, it's pretty rare that you really want to allow any string. You might want to constrain it more. This type library has types like NonEmptyStr and UpperCaseStr.

    Types::Common::Numeric does the same for numbers, giving you type constraints like PositiveInt and IntRange[1,10].

    Both of these libraries come bundled with Type::Tiny, so if you're already using Types::Standard, they won't add any extra dependencies to your code.
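
    A small sketch of how these read in practice (the class and attribute names here are made up for illustration):

    package MyApp::Order {
        use Moo;
        use Types::Common::String  qw( NonEmptyStr UpperCaseStr );
        use Types::Common::Numeric qw( PositiveInt IntRange );

        has customer => ( is => 'ro', isa => NonEmptyStr,    required => 1 );
        has code     => ( is => 'ro', isa => UpperCaseStr );
        has quantity => ( is => 'ro', isa => PositiveInt,    default => 1 );
        has priority => ( is => 'ro', isa => IntRange[1,10], default => 5 );
    }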

    Types::TypeTiny

    This is a type library created for Type::Tiny's internal use; it gives you types like ArrayLike, HashLike, and CodeLike, which also accept appropriately overloaded objects.

    Again, it's bundled with Type::Tiny, so it won't add any extra dependencies.
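
    A quick sketch (the class name is invented for illustration): CodeLike accepts either a plain coderef or an object that overloads &{}, and HashLike does the same for hashrefs and %{} overloading:

    package MyApp::Task {
        use Moo;
        use Types::TypeTiny qw( CodeLike HashLike );

        has callback => ( is => 'ro', isa => CodeLike, required => 1 );
        has options  => ( is => 'ro', isa => HashLike, default => sub { {} } );
    }

    my $task = MyApp::Task->new( callback => sub { print "called\n" } );
    $task->callback->();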

    Types::DateTime

    A type library for DateTime objects, allowing them to be coerced from strings.

    has start_date => (
        is     => 'ro',
        isa    => DateTimeUTC,
        coerce => 1,
    );

    The above will not only coerce the attribute to a DateTime object, but also coerce it to the correct time zone.
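
    A hedged usage sketch (the class name is invented; the behaviour is just the coercion described above): a DateTime passed in another time zone should come out as UTC:

    use DateTime;

    my $meeting = MyApp::Meeting->new(
        start_date => DateTime->new(
            year => 2019, month => 1, day => 20, hour => 9,
            time_zone => 'Europe/Vienna',
        ),
    );
    print $meeting->start_date->time_zone->name, "\n";  # UTC
    print $meeting->start_date->hour, "\n";             # 8 (09:00 CET is 08:00 UTC)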

Streaming Market Quotes from Ally Invest
by Your Mother
on Jan 04, 2019 at 14:03

    This service requires you to have an account with Ally. Their investment API is new. I don't work for them or have any professional relationship with them, but they are often mentioned as the best online bank in the US, I've been a customer since they were GMAC (20 years), and they really came through for me once, so I have no reservations about recommending them and sharing some code to use their services.

    I had trouble getting streaming to work, which was embarrassing because it's supposed to be the part of Perl I'm good at. :P pmqs, haukex, bliako, kschwab, and vr all stepped up to help and solved my trouble. My new trouble is a websocket client—a cat to skin later—which led me back to Mojolicious. I knew it had some server support so I figured the example code might show client code. I was pleasantly shocked to see it supports it completely via its own client, Mojo::UserAgent. All hail sri!

    I was even happier to see the client supports gzip content, even in streams. Might be more confusing code for some but for me it was much easier to follow. I have hesitated to move some of my personal code/practices to Mojo but this is probably the shove I needed.

    Since the problem is now solved with both libraries, I figured I should share some of the code here. Both rely on the terrific WWW::OAuth. There is a fair bit of identical boilerplate for the arguments and environment.

    WWW::Mechanize (LWP::UserAgent)

    #!/usr/bin/env perl
    use 5.10.0;
    use strictures;
    use WWW::Mechanize; # LWP::UserAgent is almost the same here.
    use WWW::OAuth;
    use Compress::Zlib;

    my @symbols = grep /\A[A-Z.]+\z/, @ARGV;
    die "Give a list of symbols; e.g., AAPL GHII AMZN XOM DIS PBF BABA JD AMD VOO\n"
        unless @symbols;
    my $sym = join ",", @symbols;

    die "Missing ENV values: ALLY_CLIENT_ID ALLY_CLIENT_SECRET ALLY_TOKEN ALLY_TOKEN_SECRET\n"
        unless $ENV{ALLY_CLIENT_ID} and $ENV{ALLY_CLIENT_SECRET}
           and $ENV{ALLY_TOKEN}     and $ENV{ALLY_TOKEN_SECRET};

    my $oauth = WWW::OAuth->new(
        client_id     => $ENV{ALLY_CLIENT_ID},
        client_secret => $ENV{ALLY_CLIENT_SECRET},
        token         => $ENV{ALLY_TOKEN},
        token_secret  => $ENV{ALLY_TOKEN_SECRET}
    );

    my $mech = WWW::Mechanize->new( autocheck => undef );
    $mech->add_handler( request_prepare => sub { $oauth->authenticate($_[0]) } );

    my $gunzip = inflateInit( WindowBits => 16 + MAX_WBITS )
        or die "Cannot create a inflation stream\n";

    $mech->add_handler( response_data => sub {
        my ( $response, $ua, $h, $data ) = @_;
        $response->content(undef); # Else will append.
        my ( $buffer, $status ) = $gunzip->inflate($data);
        die "zlib error: $status" if length $status;
        say $buffer;
    });

    $mech->get("https://stream.tradeking.com/v1/market/quotes?symbols=$sym");

    __END__

    Mojo::UserAgent

    #!/usr/bin/env perl
    use 5.10.0;
    use strictures;
    use Mojo::UserAgent;
    use WWW::OAuth;

    my @symbols = grep /\A[A-Z.]+\z/, @ARGV;
    die "Give a list of symbols; e.g., AAPL GHII AMZN XOM DIS PBF BABA JD AMD VOO\n"
        unless @symbols;
    my $sym = join ",", @symbols;

    die "Missing ENV values: ALLY_CLIENT_ID ALLY_CLIENT_SECRET ALLY_TOKEN ALLY_TOKEN_SECRET\n"
        unless $ENV{ALLY_CLIENT_ID} and $ENV{ALLY_CLIENT_SECRET}
           and $ENV{ALLY_TOKEN}     and $ENV{ALLY_TOKEN_SECRET};

    my $oauth = WWW::OAuth->new(
        client_id     => $ENV{ALLY_CLIENT_ID},
        client_secret => $ENV{ALLY_CLIENT_SECRET},
        token         => $ENV{ALLY_TOKEN},
        token_secret  => $ENV{ALLY_TOKEN_SECRET}
    );

    my $ua = Mojo::UserAgent->new( max_response_size => 0 );   # Stream means no max.
    $ua->on( start => sub { $oauth->authenticate( $_[1]->req ) } ); # OAuth all requests.

    my $tx = $ua->build_tx( GET => "https://stream.tradeking.com/v1/market/quotes?symbols=$sym" );
    $tx->res->content
       ->unsubscribe("read")
       ->on( read => sub {
            my ( $content, $bytes ) = @_;
            say $bytes;
       });
    $tx = $ua->start($tx);

    __END__
Encrypting Source Filter
by kschwab
on Dec 28, 2018 at 14:48

    After reading bliako's post, I was curious.

    I wanted to experiment a bit with his "I want to be asked for a password/key at encryption stage and then asked just once" requirement. I didn't get it working with PAR, or any ability to also encrypt used modules, so it's not acceptable as an answer there. Might be useful to play with, though. It works in a way that's similar to Acme::Bleach, except that I don't overwrite the original file. You'll need to have a working openssl binary in your PATH somewhere.

    Feedback welcome.

    Save this as "AESFilter.pm"...
    package AESFilter;

    use IPC::Open2;

    our $openssl = "openssl enc -aes-256-cbc -a";
    our $marker  = '#AESFilter';

    sub encrypt {
        $_[0] =~ s/$marker//gs;
        my $pid = open2(my $rdr, my $wrt, "$openssl 2>>/dev/null");
        print $wrt $_[0];
        close $wrt;
        my $output;
        while (<$rdr>) { $output .= $_ }
        close $rdr;
        waitpid($pid, 0);
        my $status = $? >> 8;
        if ($status != 0) {
            die("Exit status $status from openssl, encryption failed\n");
        }
        return $output;
    }

    sub decrypt {
        my $pid = open2(my $rdr, my $wrt, "$openssl -d 2>>/dev/null");
        print $wrt $_[0] . "\n";
        close $wrt;
        my $output;
        while (<$rdr>) { $output .= $_ }
        close $rdr;
        waitpid($pid, 0);
        my $status = $? >> 8;
        if ($status != 0) {
            die("Exit status $status from openssl, decryption failed\n");
        }
        return $output;
    }

    open(IN, $0) or die "Can't open [$0]: $!\n";
    my $prior = '';
    my $code  = '';
    my $seen  = 0;
    while (<IN>) {
        if ($seen) { chomp; $code .= $_; next; }
        $prior .= $_;
        if (/use AESFilter;/) { $seen = 1 }
    }
    close IN;

    if ($code =~ s/^$marker//gm) {
        my $clear = decrypt($code);
        eval($prior . $clear);
        print STDERR $@ if $@;
        exit;
    }

    my $outfile = $0 . ".enc";
    die "Encrypted file [$outfile] already exists\n" if (-e $outfile);
    my $encrypted = encrypt($code);
    open(OUT, ">$outfile") or die "Can't open [$outfile] for write: $!\n";
    printf OUT "%s%s\n%s", $prior, $marker, $encrypted;
    close OUT;
    exit;

    1;

    To play with it, create a script like what's below. The first time you run it, it will create an encrypted script, with an extension of ".enc". So, if your script is called "foo", it creates a new file called "foo.enc" that's encrypted. It's calling openssl to get a password, so you'll be prompted for a password.

    #!/usr/bin/perl
    # so that you don't have to install AESFilter.pm, just
    # have it in your current dir
    use lib ".";
    # anything before the next line will be in the output in cleartext
    # ...anything after will be encrypted
    use AESFilter;

    print "test123\n";
    print "again\n";
    for ("one", "two", "three", "four") {
        print $_ . "\n";
    }
    If you save the code in a file called "foo", and run it once (with a password of '0'), it will produce a file called "foo.enc", that looks like this:
    #!/usr/bin/perl
    # so that you don't have to install AESFilter.pm, just
    # have it in your current dir
    use lib ".";
    # anything before the next line will be in the output in cleartext
    # anything after will be encrypted
    use AESFilter;
    #AESFilter
    U2FsdGVkX1/CjxWDKOh4Xdw/7c0PoKnkUFQsf5gxo3F7RXqcEtmdsAgeEmb1g/QO
    qd82hklpUxP/SNzbs34Z2NdzEStaDpeTlke1unf18gAw/2hlu78CIIItHVuAZlrH
    ovJhqCBhP0Rck1RwXt3cJw==
    And, if you run that code, it will prompt for the password.
DBD::Pg insert data into an array
by rdfield
on Dec 19, 2018 at 12:59

    An internet search didn't turn up an example of inserting data into a PostgreSQL array (quite a few for retrieving data, though), so I thought I'd post this here.

    Given a table:
    create table my_table(
        id                  serial,
        description         varchar(255),
        associated_str_data varchar(255)[],
        associated_int_data integer[]
    );
    just using arrayrefs for data supplied to the placeholders does the job:
    my $desc = "A description"; my @assoc_str_data = ("string 1", "string 2"); my @assoc_int_data = (1,2,3); $dbh->do("insert into my_table(description, associated_str_data, assoc +iated_int_data) values (?,?,?)", undef, $desc, \@assoc_str_data, \@assoc_int_data);

    Checking the result in psql shows the data inserted OK:

    # select * from my_table;
     id | description   | associated_str_data     | associated_int_data
    ----+---------------+-------------------------+---------------------
      2 | A description | {"string 1","string 2"} | {1,2,3}
    (1 row)
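
    For completeness, reading the row back from Perl is just as direct: recent DBD::Pg versions expand array columns into Perl array references by default (a quick sketch against the table above):

    my $row = $dbh->selectrow_hashref(
        "select description, associated_str_data, associated_int_data
           from my_table
          where description = ?",
        undef, "A description"
    );
    print $row->{description}, "\n";
    print join( ", ", @{ $row->{associated_str_data} } ), "\n";  # string 1, string 2
    print join( ", ", @{ $row->{associated_int_data} } ), "\n";  # 1, 2, 3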

    rdfield

cron-explain.pl -- cron next appointments
by Discipulus
on Nov 27, 2018 at 07:56
    Hello nuns and monks!

    I don't know if it is so cool.. anyway, I was playing with cron entries when I discovered Algorithm::Cron and its cool next_time method, which suddenly provoked a one-liner moment: perl -MAlgorithm::Cron -E "say scalar localtime (Algorithm::Cron->new(base => 'local',crontab => $ARGV[0])->next_time(time)) " "0 9 12 4 *" which prints Fri Apr 12 09:00:00 2019

    But this was not cool enough. The following program, on the other hand, parses a cron file, the output of crontab -l, or some pasted input, and shows an ordered list of the next commands cron will run.

    It accepts input from a pipe, or from a file with -f filename; if nothing is given it expects some input to be pasted, followed by CTRL-Z or CTRL-D (Windows or Linux).

    Instead of the above input you can use -c "crontab-entry" to parse a solitary crontab entry.

    And last but not least, with the -n N parameter the program will show the next N occurrences of the scheduled programs.

    use strict;
    use warnings;
    use Getopt::Long;
    use Algorithm::Cron;

    my $file;
    my $howmany;
    my $crontab;
    my $help;
    my @lines;
    my $helptext = "USAGE:\n $0 [ [-f filename | -c STRING] -n N]\n\n".
                   " $0 -f filename\n".
                   " $0 -f filename -n 2\n".
                   " crontab -l | $0 \n".
                   " crontab -l | $0 -n 2\n".
                   " cat filename | $0\n".
                   " cat filename | $0 -n 3\n".
                   " $0 -c 'crontab entry'\n".
                   " $0 -c 'crontab entry' -n 5\n".
                   " $0 (paste some content followed by CTRL-D or CTRL-Z on a newline)\n".
                   " $0 -n 4 (paste some content followed by CTRL-D or CTRL-Z on a newline)\n".
                   " $0 -h (print this help)\n";

    GetOptions (
        "f|file=s"    => \$file,
        "n=i"         => \$howmany,
        "c|crontab=s" => \$crontab,
        "h|help"      => \$help,
    ) or die( $helptext );

    print $helptext and exit if $help;

    if ( $crontab ) { @lines = $crontab }
    elsif ( $file and -e -f -r $file ){
        open my $fh, '<', $file or die;
        @lines = <$fh>;
    }
    else { @lines = <> }

    foreach my $line ( sort {
                            Algorithm::Cron->new(
                                base    => 'local',
                                crontab => join ' ', (split /\s+|\t/, $a)[0..4]
                            )->next_time(time)
                            <=>
                            Algorithm::Cron->new(
                                base    => 'local',
                                crontab => join ' ', (split /\s+|\t/, $b)[0..4]
                            )->next_time(time)
                       } grep { /^(\d|\*)/ } @lines ){
        my @parts  = split /\s+|\t/, $line;
        my $now    = time;
        my $repeat = $howmany;
        print scalar localtime (
            Algorithm::Cron->new(
                base    => 'local',
                crontab => join ' ', @parts[0..4]
            )->next_time($now)
        );
        print " => ( @parts[0..4] )", ($crontab ? "" : " => @parts[5..$#parts]"), "\n";
        if ( --$repeat ){
            while ( $repeat > 0 ){
                $now = Algorithm::Cron->new(
                           base    => 'local',
                           crontab => join ' ', @parts[0..4]
                       )->next_time( $now );
                print scalar localtime (
                    Algorithm::Cron->new(
                        base    => 'local',
                        crontab => join ' ', @parts[0..4]
                    )->next_time($now)
                );
                print "\n";
                $repeat--;
            }
        }
    }

    Given a sample file, the following example shows the usage:

    cat crontab.txt
    # Crontab Environmental settings
    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    00 3 * 7 0          /path/to/command
    #15 20 * 1-7 *      /path/to/command2
    */30 7,21 1-15 1 *  /path/to/another/command
    # m h dom mon dow user  command
    21 * * * *    root  cd / && run-parts --report /etc/cron.hourly
    0,30 6 * * *  root  test -x /usr/sbin/blah
    #47 6 * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * *    root  test -x /usr/sbin/blahblah || ( cd / && run-parts )

    cat crontab.txt | perl cron-explain.pl
    Tue Nov 27 14:21:00 2018 => ( 21 * * * * ) => root cd / && run-parts --report /etc/cron.hourly
    Wed Nov 28 06:00:00 2018 => ( 0,30 6 * * * ) => root test -x /usr/sbin/blah
    Sat Dec  1 06:52:00 2018 => ( 52 6 1 * * ) => root test -x /usr/sbin/blahblah || ( cd / && run-parts )
    Tue Jan  1 07:00:00 2019 => ( */30 7,21 1-15 1 * ) => /path/to/another/command
    Sun Jul  7 03:00:00 2019 => ( 00 3 * 7 0 ) => /path/to/command

    cat crontab.txt | perl cron-explain.pl -n 3
    Tue Nov 27 14:21:00 2018 => ( 21 * * * * ) => root cd / && run-parts --report /etc/cron.hourly
    Tue Nov 27 15:21:00 2018
    Tue Nov 27 16:21:00 2018
    Wed Nov 28 06:00:00 2018 => ( 0,30 6 * * * ) => root test -x /usr/sbin/blah
    Wed Nov 28 06:30:00 2018
    Thu Nov 29 06:00:00 2018
    Sat Dec  1 06:52:00 2018 => ( 52 6 1 * * ) => root test -x /usr/sbin/blahblah || ( cd / && run-parts )
    Tue Jan  1 06:52:00 2019
    Fri Feb  1 06:52:00 2019
    Tue Jan  1 07:00:00 2019 => ( */30 7,21 1-15 1 * ) => /path/to/another/command
    Tue Jan  1 07:30:00 2019
    Tue Jan  1 21:00:00 2019
    Sun Jul  7 03:00:00 2019 => ( 00 3 * 7 0 ) => /path/to/command
    Sun Jul 14 03:00:00 2019
    Sun Jul 21 03:00:00 2019

    perl cron-explain.pl -c "2-5 9 12 4 *" -n 4
    Fri Apr 12 09:02:00 2019 => ( 2-5 9 12 4 * )
    Fri Apr 12 09:03:00 2019
    Fri Apr 12 09:04:00 2019
    Fri Apr 12 09:05:00 2019

    have fun!

    L*

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
GeoIP revisited
by cavac
on Nov 23, 2018 at 07:35

    A couple of days ago I looked into GeoIP in Perl+PostgreSQL+GeoIP = Awesome!. Since then I have learned that the GeoIP lists I was using were out of support. The new public lists are in a new format. So I took the opportunity to rewrite the whole thing and do pretty much everything with Perl, not using external commands like "unzip" and "wget". This should make things a bit more portable.

    I'm sorry, it isn't written "nice" and isn't really documented. I designed it as a cron job for a single private server ;-)

    I'm still calling the perl interpreter from a bash script so I can set the correct environment variables and stuff. But it's a lot smaller now:

    #!/usr/bin/env bash
    . ~/.bashrc_activestate
    cd /home/myuser/src/geoip
    perl updategeoip.pl

    The database table stays exactly the same as in the last post; here it is again for reference:

    CREATE TABLE geoip (
        netblock     cidr NOT NULL,
        country_code text NOT NULL,
        country_name text NOT NULL,
        CONSTRAINT geoip_pk PRIMARY KEY (netblock)
            USING INDEX TABLESPACE "NAMEOFINDEXTABLESPACE"
    )
    WITH ( OIDS=FALSE )
    TABLESPACE "NAMEOFDATATABLESPACE";

    ALTER TABLE geoip OWNER TO "mydatabaseuser";

    And here is the new all-in-one script:

    This script uses the newish "GeoLite2" databases from MaxMind. If you use them, please make sure you comply with the open source licensing stated on their official page.
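
    As a rough, hypothetical sketch of the "no external wget/unzip" idea (the URL and archive member names below are placeholders, not taken from the actual script), fetching and unpacking a GeoLite2 zip in pure Perl can be done with HTTP::Tiny and IO::Uncompress::Unzip:

    use strict;
    use warnings;
    use HTTP::Tiny;
    use IO::Uncompress::Unzip qw( unzip $UnzipError );

    my $url = 'https://example.com/GeoLite2-Country-CSV.zip';   # placeholder URL
    my $zip = '/tmp/geolite2.zip';

    # Download (only re-fetches if the remote file changed)
    my $resp = HTTP::Tiny->new->mirror( $url, $zip );
    die "Download failed: $resp->{status} $resp->{reason}\n" unless $resp->{success};

    # Extract a single member of the archive to a plain file
    unzip $zip => '/tmp/country-blocks.csv',
        Name => 'GeoLite2-Country-CSV/GeoLite2-Country-Blocks-IPv4.csv'   # placeholder member
        or die "unzip failed: $UnzipError\n";
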
    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
Perl+PostgreSQL+GeoIP = Awesome!
by cavac
on Nov 21, 2018 at 04:23

    EDIT: WARNING, THIS USES A LEGACY DATABASE THAT IS NOT UPDATED ANYMORE. Please take a look at GeoIP revisited for an updated version that uses an up-to-date version of the MaxMind GeoIP database.


    Sometimes you have to work with GeoIP, e.g. mapping an IP address to the origin country. Be it for legal reasons (geoblocking) or just so you know where your target audience is coming from.

    You could just make online lookups for every request. But if you are running a PostgreSQL database backend anyway, there is a simple way to do it in DB, since PostgreSQL supports a CIDR column type.

    First, let us define a database table:

    CREATE TABLE geoip (
        netblock     cidr NOT NULL,
        country_code text NOT NULL,
        country_name text NOT NULL,
        CONSTRAINT geoip_pk PRIMARY KEY (netblock)
            USING INDEX TABLESPACE "NAMEOFINDEXTABLESPACE"
    )
    WITH ( OIDS=FALSE )
    TABLESPACE "NAMEOFDATATABLESPACE";

    ALTER TABLE geoip OWNER TO "mydatabaseuser";
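
    Once the table is filled, looking an address up from Perl is a single query using PostgreSQL's network containment operator (a sketch; the column names are the ones defined above):

    my $sth = $dbh->prepare(
        "SELECT country_code, country_name
           FROM geoip
          WHERE netblock >>= ?::inet
          LIMIT 1"
    );
    $sth->execute('8.8.8.8');
    if ( my ( $code, $name ) = $sth->fetchrow_array ) {
        print "IP is located in $name ($code)\n";
    }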

    Next, we need a bash script we can run from crontab for our daily update:

    Of course, now that the up-to-date GeoIP lists are in the database, it's even possible to use an ON INSERT OR UPDATE trigger on any table that needs GeoIP data. But that I will leave as an exercise for the reader...

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
Rough Delta-V estimate for Hohmann transfer orbits
by cavac
on Nov 20, 2018 at 06:30

    I've been interested in space stuff as far back as I can remember. Quite a while back, Scott Manley had a series on the Youtubes explaining how to calculate orbital changes, and I implemented this in Javascript on my blog.

    But I always wanted a command line tool in Perl, so let's get to implementing it. It's not the nicest piece of code and it isn't optimizing the transfer burn (checking the different options of when to do the burns to minimize Delta-V), nor does it try for things like three (or more) burns/multiple intermediate orbits. But it gives a rough idea of "how badly did they miss the target orbit".

    Here is the code (long, so in a "readmore" tag):

    The basic command line is perl lithobreak.pl 1000/e0.5/0 1000/1000/10

    The first argument is the source orbit (initial orbit), the second is the target orbit. You can define each orbit in two ways, either using periapsis/apoapsis/inclination or semimajor-axis/eccentricity/inclination. All measurements are in kilometers above the Earth's surface. This example uses both:

    1. Source orbit of 1000/e0.5/0: semi-major axis of 1000km, the "e" prefix selects eccentricity mode with an eccentricity of 0.5, and an inclination of 0 degrees. Internally, this gets converted to a periapsis of 500km and an apoapsis of 1500km. Could have also been written as 500/1500/0.
    2. Target orbit of 1000/1000/10: Circular orbit of 1000km with a 10 degree inclination. Could have also been written as 1000/e0/10.

    Note: when using ap/per mode, don't worry about using the wrong order, it gets sorted out internally.
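
    The eccentricity-to-apsis conversion in the example above works out like this (a sketch of the arithmetic only, not the author's lithobreak.pl code): periapsis = a*(1-e) and apoapsis = a*(1+e).

    # Sketch: convert semi-major axis (km) + eccentricity to periapsis/apoapsis (km).
    sub ecc_to_apsides {
        my ( $semi_major, $ecc ) = @_;
        return ( $semi_major * ( 1 - $ecc ),    # periapsis
                 $semi_major * ( 1 + $ecc ) );  # apoapsis
    }

    my ( $per, $apo ) = ecc_to_apsides( 1000, 0.5 );
    print "periapsis ${per} km, apoapsis ${apo} km\n";   # 500 km, 1500 km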

    And here is the result of our calculation:

    Constants:
    Earth radius: 6371000 meter
    Earth mass: 5.97219e+24 kg
    Gravity: 6.67384e-11
    mu: 398574405096000
    Source orbit is using eccentricity notation, converting...
    Converting to meter...
    Adding earth radius...
    Orbital parameters for Source orbit:
    Periapsis: 6871000 m
    Apoapsis: 7871000 m
    Inclination: 0 deg
    Converting to meter...
    Adding earth radius...
    Orbital parameters for Target orbit:
    Periapsis: 7371000 m
    Apoapsis: 7371000 m
    Inclination: 10 deg
    Calculating semi major axis...
    Defining Hohman orbit...
    Orbital parameters for Hohman orbit:
    Periapsis: 6871000 m
    Apoapsis: 7371000 m
    Inclination: 0 deg
    Calculating first burn...
    Data for burn at source orbit periapsis:
    Speed before burn: 7870.39409917514 m/s
    Speed after burn: 7748.85334888779 m/s
    Required Delta-V: 121.540750287353 m/s
    Calculating second burn...
    Need to include plane change in second burn...
    Data for burn at Hohman orbit apoapsis:
    Speed before burn: 7223.22227109049 m/s
    Speed after burn: 7353.45599234394 m/s
    Inclination before burn: 0 deg
    Inclination after burn: 10 deg
    Inclination change: 10 deg
    Required Delta-V: 1277.04850388302 m/s
    Total Delta-V requirement: 1398.58925417037 m/s

    Yeah, 1.3km/s required Delta-V. Not good. Better call our Kerbals to build us a new sat, this one isn't going to make it...
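
    The speeds in that output follow from the vis-viva equation, v = sqrt( mu * ( 2/r - 1/a ) ); a quick sketch using the mu value printed above reproduces the first one:

    # Sketch: vis-viva speed at radius r (m) on an orbit with semi-major axis a (m).
    my $mu = 398574405096000;   # m^3/s^2, as printed by the program

    sub visviva {
        my ( $r, $a ) = @_;
        return sqrt( $mu * ( 2 / $r - 1 / $a ) );
    }

    # Speed at the source orbit periapsis: r = 6871000 m, a = (6871000 + 7871000) / 2
    printf "%.2f m/s\n", visviva( 6871000, ( 6871000 + 7871000 ) / 2 );   # ~7870.39 m/s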

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
curl2lwp - convert Curl command line arguments to LWP / Mechanize Perl code
by Corion
on Nov 14, 2018 at 13:07

    After inspiration by Your Mother and some slight hacking, I'm very proud to announce HTTP::Request::FromCurl, together with its companion online site at https://corion.net/curl2lwp.psgi. The module and the included curl2lwp program allow you to easily convert curl command lines to Perl code that uses LWP::UserAgent or WWW::Mechanize.

    Some code:

    curl2lwp.pl -X GET -A xpcurl/1.0 https://perlmonks.org/

    outputs the following code:

    my $ua = WWW::Mechanize->new();
    my $r = HTTP::Request->new(
        'GET' => 'https://perlmonks.org/',
        [
            'Accept'     => '*/*',
            'Host'       => 'perlmonks.org:443',
            'User-Agent' => 'curl/1.0',
        ],
    );
    my $res = $ua->request( $r, );

    The online version creates a bit more code, as the output there is likely not consumed by advanced Perl programmers.

    The module parses a subset of the valid curl command lines and generates equivalent code for LWP::UserAgent for it. Support for other HTTP user agents (Mojo::UserAgent, AnyEvent::HTTP, HTTP::Future) is not yet implemented but I welcome contributions there.

    The app driving the online interface is not yet released onto CPAN, but as it is mostly HTML scaffolding and some Javascript, it should be released fairly soon.

Google API Browser
by localshop
on Nov 12, 2018 at 09:42

    As I continue my pilgrimage to becoming passably proficient with Mojo and Google Cloud Services, I have been tinkering away with WebService::GoogleAPI::Client. As a working example, I was reasonably happy with the ease with which I could produce a basic Google API explorer that presents the methods and parameters of all the Google discoverable API endpoints. This is proving a handy starting point for constructing working examples that access the APIs.

    I plan to extend this to first include required scopes, then provide OpenAPI YAML, and perhaps ultimately replicate many of the features of Google's API Explorer.

    You can see the Mojo application running as a Hypnotoad socket-served application under a cPanel/WHM hosted environment at https://pscott.com.au.

    Today I'm working on the Google Drive API example, available in the GitHub repo, as a demo of an alternative approach to using a dedicated CPAN module such as the just-released Net::Google::Drive.

    If anybody has any interesting use cases requiring access to Google Cloud Services let me know. I'm trying to add a new example every few days.

Binary vs. linear search
by reisinge
on Nov 12, 2018 at 07:46

    I was trying to get my head around the binary search algorithm. I did it by comparing it to the linear search algorithm.

    #!/usr/bin/perl
    use warnings;
    use v5.14;

    # Find this word ...
    my $find = shift // "";

    # ... in this sorted list of words ...
    my @words = qw(alpha bravo charlie delta echo foxtrot golf hotel india juliett
        kilo lima mike november oscar papa quebec romeo sierra tango uniform victor
        whiskey xray yankee zulu);

    # ... using two search algorithms
    my %search = (
        linear => \&linsearch,
        binary => \&binsearch,
    );

    for my $alg ( sort keys %search ) {
        say "$alg searching '$find' in [@words] ...";
        my $idx = $search{$alg}->( $find, \@words );
        say defined $idx ? "found at index $idx" : "not found";
        say "";
    }

    sub binsearch {
        my ( $find, $array ) = @_;
        my $low  = 0;
        my $high = @$array - 1;
        while ( $low <= $high ) {
            my $try = int( ( $low + $high ) / 2 );
            say "--> trying at index $try";
            $low  = $try + 1, next if $array->[$try] lt $find;
            $high = $try - 1, next if $array->[$try] gt $find;
            return $try;
        }
        return;
    }

    sub linsearch {
        my ( $find, $array ) = @_;
        for ( my $i = 0 ; $i < @$array ; $i++ ) {
            my $try = $i;
            say "--> trying at index $try";
            if ( $array->[$try] eq $find ) {
                return $try;
            }
        }
        return;
    }
    Genius is 1 percent inspiration and 99 percent perspiration. -- Thomas Edison
Achievements Steaming ahead
by GrandFather
on Nov 12, 2018 at 01:09

    I play a bit of Civ V. Like many games these days, I downloaded it using Steam, and one of the "features" of the system is that Steam keeps track of various in-game achievements. So, silly me, I've been sucked into playing the "get the achievements" meta game.

    Which is all very fine, but Steam doesn't do a great job of showing you the achievements. It doesn't seem able to sort them in the order that they have been achieved, so sometimes it can be hard to know if you got that last thing or not. So, Perl to the rescue! The following script parses the copied-and-pasted achievements list from the Steam page and sorts it, first listing the "not yet achieved" entries, then the achieved items sorted by the order in which they were achieved.

    use strict;
    use warnings;

    my %monthOrd = (
        Jan => 1, Feb => 2,  Mar => 3,  Apr => 4,
        May => 5, Jun => 6,  Jul => 7,  Aug => 8,
        Sep => 9, Oct => 10, Nov => 11, Dec => 12,
    );

    my @records = grep {/\n/ and /[^\n]/} do {local $/ = "\n\n"; <DATA>};
    s/^\n+|\n$//gs for @records;

    for my $record (@records) {
        my @lines = split "\n", $record;
        $lines[0] //= '';
        my ($day, $month, $year, $time) =
            $lines[0] =~ /(\d+)\s+(\w+)(?:, +(\d+))?\s+\@\s+(\S+)/;

        if (defined $month && exists $monthOrd{$month}) {
            $year ||= 2018;
            $lines[0] = sprintf "%04d %2d %2d %7s", $year, $monthOrd{$month}, $day, $time;
        }
        else {
            unshift @lines, '';
        }

        $record = \@lines;
    }

    # Sort records by not achieved first. Remove blank lines from records
    @records = map {$_ = [grep {$_} @$_]; $_}
        sort {$a->[0] cmp $b->[0]} @records;

    print join "\n", @$_, '', '' for @records;

    __DATA__
    Unlocked 18 Feb, 2016 @ 11:39pm
    First in the Hearts of Your Countrymen
    Beat the game on any difficulty setting as Washington.

    Unlocked 2 Jun, 2016 @ 7:48pm
    Video et Taceo
    Beat the game on any difficulty setting as Elizabeth.

    Unlocked 18 Nov, 2017 @ 4:57pm
    Vivre La Revolution
    Beat the game on any difficulty setting as Napoleon.

    Unlocked 25 Apr, 2016 @ 9:44am
    Blood and Iron
    Beat the game on any difficulty setting as Bismarck.

    Red Badge of Courage
    Win the Civil War scenario on Deity.

    Pickett's Recharge
    Capture Gettysburg with a Confederate Infantry unit possessing the George Pickett promotion.

    Sheridan's Valley Campaign
    As Union, control Winchester, Front Royal, Harrisonburg, Staunton, and Lynchburg.

    Prints:

    Red Badge of Courage
    Win the Civil War scenario on Deity.

    Pickett's Recharge
    Capture Gettysburg with a Confederate Infantry unit possessing the George Pickett promotion.

    Sheridan's Valley Campaign
    As Union, control Winchester, Front Royal, Harrisonburg, Staunton, and Lynchburg.

    2016  2 18 11:39pm
    First in the Hearts of Your Countrymen
    Beat the game on any difficulty setting as Washington.

    2016  4 25  9:44am
    Blood and Iron
    Beat the game on any difficulty setting as Bismarck.

    2016  6  2  7:48pm
    Video et Taceo
    Beat the game on any difficulty setting as Elizabeth.

    2017 11 18  4:57pm
    Vivre La Revolution
    Beat the game on any difficulty setting as Napoleon.
    Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
Generating SVG Badges with Perl
by haukex
on Nov 10, 2018 at 09:20

    If you've looked at projects on GitHub or other sites, you may have noticed the little rectangular badges (aka shields) saying things like "build passing" with a nice green background (or yellow/red if something is wrong), and if you've used Travis CI or Coveralls, you probably know how to get these badges. There is also a nice service that can generate these badges for you, Shields.io, which is written in JavaScript.

    There didn't seem to be anything like that for Perl, so I wrote Badge::Simple!
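
    Usage is essentially a one-liner; a minimal sketch along the lines of the module's synopsis (the output filename is mine), assuming badge() returns an XML::LibXML document you can write out:

    use Badge::Simple qw( badge );

    badge( left => 'Hello', right => 'World!', color => 'yellow' )
        ->toFile('hello_world.svg');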

    Thanks to a super quick turnaround by Kenichi Ishigaki, these badges are already integrated into CPANTS: Try accessing https://cpants.cpanauthors.org/dist/Your-Dist-Name.svg or https://cpants.cpanauthors.org/author/YOURCPANID.svg and you should see a badge for your Kwalitee score (PNG badges have also been available for quite some time, just use .png instead of .svg).

    In the repository, you can find the script cpantesters.pl, which you can use to generate CPAN Testers badges for your CPAN modules. There is an open issue to perhaps integrate this directly with CPAN Testers; if anyone would like to take a stab at that, please feel free :-)

    You can see all this in action, for example, in the Badge-Simple readme: the "Kwalitee" and "CPAN Testers" badges were generated with Perl!

[WEBPERL] dynamically importing non-bundled modules via http
by LanX
on Nov 09, 2018 at 10:07
    (in continuation to webperl: fetching data over the web)

    WebPerl currently requires you to statically bundle all needed modules.

    The following Proof of Concept shows how to dynamically use non-bundled modules.

    The modules need to be pure Perl and have to be present inside the $WEBLIB directory on your server (respecting the same-domain policy avoids much trouble).

    I just copied the desired libs from my installation and listed the allowed modules in %INC_FETCH (to limit unnecessary traffic).

    The following page demonstrates how to use Data::Dump from the weblib.

    Data::Dump is currently not bundled with WebPerl. (Data::Dumper is, since it's core.)

    The mechanism of adding a callback to @INC to dynamically fetch modules from various sources is described in require.

    <!doctype html>
    <html lang="en-us">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>WebPerl &lt;script&gt; Demos</title>

    <script type="text/javascript">
      function xfetch(url) {
         var req = new XMLHttpRequest();
         req.open('GET', url , false);
         req.send(null);
         if(req.status == 200) {
            return req.responseText;
         } else {
            return "";
         }
      }
      // alert(xfetch('http://localhost:5000/lib/Data/Dump.pm'));
      // alert(xfetch('http://localhost:5000/webperl.js'));
    </script>

    <script src="webperl.js"></script>

    <!-- Please see the documentation at http://webperl.zero-g.net/using.html -->

    <script type="text/perl">

    use warnings;
    use strict;

    use Data::Dumper;
    use WebPerl qw/js/;

    BEGIN {
        my $WEBLIB = 'localhost:5000/lib';

        # gather source of module
        my $fetch = sub {
            my ($module_path) = @_;
            return js("window.xfetch('http://$WEBLIB/$module_path')");
        };

        # allowed modules in weblib
        my %INC_FETCH = (
            'Data/Dump.pm' => 1,
        );

        # loader hook for @INC
        my $loader = sub {
            my ($sub, $filename) = @_;

            if ( $INC_FETCH{$filename} ) {
                my $source = $fetch->($filename);
                unless ($source) {
                    warn "Fetching $filename from $WEBLIB failed";
                    return;
                }
                open my $fh_source, "<", \$source;
                my $pre = '';
                #$pre = qq{warn '*** Loading $filename ***';};
                return (\$pre, $fh_source);
            }
            return;
        };

        push @INC, $loader;
    }

    use Data::Dump qw/pp/;

    my $HoA = { map { $_ => [reverse 1..5] } "a".."d" };

    warn pp $HoA;

    #use Data::Dumper;
    #warn Dumper $HoA;

    </script>

    <!-- Optional STDOUT/STDERR text area (if you don't use this, output goes to Javascript console) -->
    <script>
      window.addEventListener("load", function () {
         document.getElementById('output')
             .appendChild( Perl.makeOutputTextarea() );
      });
    </script>

    </head>
    <body>

    <p>This is a demo of <a href="http://webperl.zero-g.net" target="_blank">WebPerl</a>!</p>

    <div id="output"></div>

    <div id="buttons">
        <button id="my_button">Testing!</button>
    </div>

    </body>
    </html>

    OUTPUT:

    {
      a => [5, 4, 3, 2, 1],
      b => [5, 4, 3, 2, 1],
      c => [5, 4, 3, 2, 1],
      d => [5, 4, 3, 2, 1],
    } at /tmp/scripts.pl line 56.

    DISCLAIMER: This code is beta and follows the release-often paradigm.

    Successfully tested with Chrome AND Firefox. FF showed new "Content Security Policy" problems; I didn't have the time to dig into it and install the necessary Plack CORS modules.°

    The principle is universal: as soon as WebPerl can run in a browser and has the capacity to dynamically fetch code, using unbundled (pure) Perl modules becomes trivial.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery FootballPerl is like chess, only without the dice

    °) Works like a charm in FF; I just forgot to disable "noscript" filtering for localhost! ;)

read an mp3 stream and save tracks using icy-metaint (ICY protocol)
by Discipulus
on Nov 08, 2018 at 03:12
    Hello folks!

    It all began when I discovered that my Shoutcast bookmark was no longer opening the usual list of internet streams ordered by genre..

    I said to myself: I know a bit of Perl.. I'll do it my own way.. and I started investigating mp3 streams (filling my screen with binary #*%_ many times ;) and even if I haven't reached my original goal, I ended up with something useful and cool.

    The program will listen to an mp3 stream and save each song with the correct title, skipping advertisements if possible and doing its best to produce an entire song in each file.

    The POD documentation describes what I have understood of the (undocumented) ICY protocol.

    Suggestions and improvements welcome, have fun!

    use strict;
    use warnings;
    use Net::HTTP;
    use URI;
    use File::Spec;
    use Getopt::Long;
    $|++;

    my $VERSION = 23;

    # SOME DEFAULTS
    my $debug = 0;
    # --extraparts default value
    my $cache_size = 2;
    my $url;
    my $agent = 'Stream-Cutter-v'.$VERSION;

    unless ( GetOptions (
                'url=s'              => \$url,
                'agent=s'            => \$agent,
                'extraparts|cache=i' => \$cache_size,
                'debug=i'            => \$debug,
            )) { die "$0 -url URL [-agent STRING -extraparts N -debug [0-2]]" }

    unless ( $url ){
        print "which URL you want to open?";
        $url = <STDIN>;
        chomp $url;
    }

    # OTHER VARIABLES
    # chunk number for debug purpose
    my $num = 0;
    # cache used to have more chunks wrote to a file when new song starts
    my @cache;
    # used to append to previous file
    # how_many parts will be equal to $cache_size when new song begin
    my %previous_file = ( name => undef, how_many => 0 );

    my ( $socket, $icymetaint ) = open_connection( $url );

    die "unable to get icy-metaint!"
        unless defined $icymetaint and $icymetaint > 0;

    read_stream( $socket, $icymetaint );

    ###############################################################################
    sub open_connection {
        my $url = shift;
        my $uri = URI->new( $url );
        my $sock = Net::HTTP->new(
            Host     => $uri->host,
            PeerPort => $uri->port,
        ) or die $@;
        $sock->write_request(
            GET          => $uri->path,
            'User-Agent' => $agent,
            # very important: ask for metadata!
            'Icy-MetaData' => 1
        ) or die $@;
        my ($http_code, $http_mess, %headers) = $sock->read_response_headers;
        print join ' ', "\nConnecting to:\t", $uri->as_string,
                        "\nStatus:\t", $http_code, $http_mess, "\n";
        # go on if everything is OK 200
        if ( $http_code == 200 ){
            # grab useful headers and set them to empty string if undefined
            map { $headers{$_} = $headers{$_} // '' } 'Server', 'icy-name', 'icy-name',
                                                      'icy-genre', 'icy-br';
            print join "\n", "Server:\t".$headers{'Server'},
                             "name:\t".$headers{'icy-name'},
                             "genre:\t".$headers{'icy-genre'},
                             "byte rate:\t".$headers{'icy-br'}."kb/s\n\n";
            if ( $debug ){
                print "HEADERS:\n",
                      (map { qq(\t$_\t=>\t$headers{$_}\n) }
                       grep { defined $headers{$_} } %headers), "\n\n";
            }
            return ($sock, $headers{'icy-metaint'});
        }
        # return undef if not OK 200
        else {
            print "Errors opening the given site..\n";
            return undef;
        }
    }
    ###############################################################################
    sub read_stream {
        my ($socket, $metaint) = @_;
        # output filehandle
        my $out;
        my $new_metadata;
        my $file_name;
        while( 1 ) {
            my $buffer;
            # READ the chunk of music
            $socket->read($buffer, $metaint);
            # CHECK for new metadata
            if ( $new_metadata = read_meta($socket) ){
                # WRITE and get back the NEW filehadle
                $out = write_stream( $buffer, $out, $new_metadata );
            }
            else {
                # WRITE and get back the OLD filehadle
                $out = write_stream( $buffer, $out );
            }
        }
    }
    ###############################################################################
    sub read_meta {
        my $socket = shift;
        my ( $metalen, $metabyte );
        $socket->read($metabyte, 1);
        $metalen = unpack("C", $metabyte) * 16;
        if ( $metalen > 0 ) {
            # We have NEW metadata! JOY
            print "[$metalen metadata] " if $debug > 1;
            my $metadata;
            $socket->read($metadata, $metalen);
            $metadata = unpack("A$metalen", $metadata);
            print "\nMETADATA: [", $metadata, "]\n" if $debug > 1;
            return $metadata;
        }
        else {
            return undef;
        }
    }
    ###############################################################################
    sub write_stream {
        my ($buf, $out, $new_metadata) = @_;
        # count the overall chunk count for debug purpose
        $num++;
        # NEW song got from metadata
        if ( $new_metadata ){
            my $track_name = $1 if $new_metadata =~ /^StreamTitle='([^;]*)';/i;
            # if StreamTitle is empty probably is an advertisement. For example:
            # METADATA: [StreamTitle='';StreamUrl='';adw_ad='true';
            # durationMilliseconds='20009';adId='12161';insertionType='preroll';
            print "\ncurrently playing:\t".
                  ($track_name ? $track_name : '**advertisement**')."\n";
            if ($out and fileno $out and $cache_size){
                print "writing part number [$num] to current file\n" if $debug;
                # DOUBLE write of the current buff
                print $out $buf;
            }
            my $file_name;
            ($file_name = $track_name) =~ s/\s+/_/g;
            $file_name =~ s/\/\\:\*\?\"<>\|//g;
            $file_name .= '.mp3';
            # if StreamTitle is empty probably is an advertisement
            $file_name = File::Spec->devnull() unless $track_name;
            # set previous filename, but still how_many = 0
            $previous_file{name} = $file_name;
            # the new file
            open $out, '>', $file_name or die "unable to write to $file_name!";
            binmode $out;
            if ( $cache_size > 0 ){
                # PREPEND cache items to the new opened file
                while ( my $cache_item = shift @cache ) {
                    print "writing cached part to new file: $file_name\n" if $debug;
                    print $out $cache_item;
                }
            }
            # WRITE $buf to a new file
            print "writing part number [$num] to new file: $file_name\n" if $debug;
            print $out $buf;
        }
        # no new track..
        else {
            print "$num " if $debug > 1;
            # WRITE $buf to the already opened file
            if ( $out and fileno $out ){
                print $out $buf or die;
            }
            # check previous_file if needed to be appended
            if ( $previous_file{name} and $previous_file{how_many} ){
                print "appending part to previous file too\n" if $debug;
                open my $oldfh, '>>', $previous_file{name}
                    or die "unable to open $previous_file{name} in append mode!";
                binmode $oldfh;
                print $oldfh $buf or die "unable to write!";
                close $oldfh or die "unable to close filehandle!";
                $previous_file{how_many}--;
            }
            else {
                $previous_file{name}     = undef;
                $previous_file{how_many} = $cache_size;
            }
        }
        # cache rotates..
        if ( $#cache == $cache_size - 1 ){
            shift @cache,
        }
        push @cache, $buf;
        # return the current file handle
        return $out;
    }

    __DATA__

    =head1 NAME

    C<mp3streamcutter.pl>

    This program open an mp3 stream and save songs to distinct files. It's intended
    to understand the ICY protocol and not intended to save copirighted data.

    =head1 SYNOPSIS

      mp3streamcutter.pl -url URL [-agent STRING -extraparts N -debug 0-2]

      --url URL
        is the only necessary argument. Url must be complete of the protocol

      --agent STRING
        you can use a custom user-agent to send to server during the connection.
        Agent defaults to Stream-Cutter-v with the version number of the program
        appended. You can find useful to use the string WinampMPEG/2.9 if refused
        by some server

      --extraparts N
        This parameter governs how many extra parts of the stream have to be
        prepended to a new file (via cache) and appended to the previous file
        (via reopening and appending). --extraparts defaults to 2 that is the best
        I found to have an entire song to the correct file and not to much junk
        in it (parts of other songs).
        --cache is an alias for --extraparts

      --debug 0-2
        With -debug 0 only few details of the server and the title of the current
        song will be displayed.
        With -debug 1 also headers received from the server are shown and all
        operations involving new files creation and extra parts possibly
        (see --extraparts) wrote to these files
        Debug level 2 will display also each metadata received (if it contains
        data) and a progressive number for each chunk of music received

    =head1 DESCRIPTION

    This program was inspired by a post wrote by qbxk for perlmonks (see references).
    The core part of the program is just a rewrite of the original code by qbxk

    The ICY protocol is not well documented. It's build on top of the HTTP one.
    This program can help you to understand it in a better way. Basically music
    chunks are intercalated with metadata chunks at the position given by the
    C<icy-metaint> header value. At this position you will find a lone byte
    indicating the length of the following metadata. If this byte is not 0 but N,
    then the following N bytes will be of metadata. Normally in the metadata you
    find the C<StreamTitle> containing the title of the current song. You can also
    find the C<StreamUrl> generally empty and other things like C<adw_ad> related
    to advertisements, followed by the duration of the advertisement and other
    characteristics of the advertisement.

    So a typical chunk of metadata for a new song in the stream will be like:

      StreamTitle='Beethoven - Goldberg Variations';StreamUrl='';

    or sometimes just like:

      StreamTitle='The Clash - Loose this skin';

    without the C<StreamUrl> part, while an advertisemente will look like:

      StreamTitle='';StreamUrl='';adw_ad='true';durationMilliseconds='20009';adId='12161';insertionType='preroll';

    The current version of the program will try to skip advertisements checking
    for empty C<StreamTitle> and then using C<File::Spec>'s C<devnull()> as
    filename to save the stream.

    In the headers of the HTTP request you had to ask for C<Icy-MetaData>, then
    the server will answer with various icy headers, notably C<icy-metaint> that
    is the dimension of music chunks. After each chunk there will be a byte
    containing the lenght of the following metadata. If this is 0 it means no
    metadata will follow, but if it is a number a correnspondant number of bytes
    have to be read to have the metadata back, typically the title and the author.

    The problem is that the title will arrive when the song already started, so I
    decided to add a cache (see C<--extraparts> argument) to append and prepend
    chuncks to songs. This way you will have probably unneeded data at start and
    at the end of each file but for sure the entire song.

    Let's say Icy-MetaData is 5 (generally is 16k), you have a situation like
    ( '=' it's a chunk):

       -unknown song(1)------ -------------- The Clash - Loose This Skin ------- ...
           |                  |                        |                 |
       STREAM-> = = = [0] = = = = = [3][*][*][*] = = = = = [0] = = = = = [0] = = = ...
           |          |              |                     |             |
       unknown song   |          new song                  |             |
                      |              |                     ------ The Clash - Loose This Skin
                      |              |
              empty metadata         ------------- metadata with new title
                                     |
                              length of metadata

    (1) about unknown song: probably you never get an unknown song: I suspect that
    ICY protocol will send icy metadata as first part of a brand new response.

    =head1 REFERENCES

    See the original post by qbxk at L<perlmonks|https://www.perlmonks.org/index.pl?node_id=534645>

    L<a post about ICY protocol|https://stackoverflow.com/questions/4911062/pulling-track-info-from-an-audio-stream-using-php/4914538#4914538>

    L<The ICY protocol explained|http://www.smackfu.com/stuff/programming/shoutcast.html>

    L<A very datailed tutorial|https://thecodeartist.blogspot.com/2013/02/shoutcast-internet-radio-protocol.html>

    L<a not complete but useful description of ICY|https://www.radiotoolbox.com/community/forums/viewtopic.php?t=74>

    L<a technical article about streaming networks|https://people.kth.se/~johanmon/dse/casty.pdf>

    =head1 AUTHOR

    This program is by Discipulus as found in perlmonks.org with the fundamental
    inspiration of the above mentioned qbxk

    This program is licensed under the same terms of the Perl languange.

    L*

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
