
Cool Uses for Perl


This section is the place to post your general code offerings -- everything from one-liners to full-blown frameworks and apps.

CUFP's -- cron next appointments
1 direct reply — Read more / Contribute
by Discipulus
on Nov 27, 2018 at 07:56
    Hello nuns and monks!

    I don't know if this is so cool.. anyway, I was playing with cron entries when I discovered Algorithm::Cron and its cool method next_time, which suddenly provoked a one-liner moment: perl -MAlgorithm::Cron -E "say scalar localtime (Algorithm::Cron->new(base => 'local',crontab => $ARGV[0])->next_time(time)) " "0 9 12 4 *" which prints Fri Apr 12 09:00:00 2019

    But this was not cool enough. The following program, on the other hand, parses a cron file, the output of crontab -l, or some pasted input, and shows an ordered list of the next commands cron will run.

    It accepts input from a pipe, from a file with -f filename, and if nothing was given it expects some input to be pasted, followed by CTRL-Z or CTRL-D (Windows or Linux).

    Instead of the above input you can use -c "crontab-entry" to parse a solitary crontab entry.

    And, dulcis in fundo, with the -n N parameter the program will show the next N occurrences of the scheduled programs.

    use strict;
    use warnings;
    use Getopt::Long;
    use Algorithm::Cron;

    my $file;
    my $howmany;
    my $crontab;
    my $help;
    my @lines;
    my $helptext = "USAGE:\n $0 [ [-f filename | -c STRING] -n N]\n\n".
                   " $0 -f filename\n".
                   " $0 -f filename -n 2\n".
                   " crontab -l | $0 \n".
                   " crontab -l | $0 -n 2\n".
                   " cat filename | $0\n".
                   " cat filename | $0 -n 3\n".
                   " $0 -c 'crontab entry'\n".
                   " $0 -c 'crontab entry' -n 5\n".
                   " $0 (paste some content followed by CTRL-D or CTRL-Z on a newline)\n".
                   " $0 -n 4 (paste some content followed by CTRL-D or CTRL-Z on a newline)\n".
                   " $0 -h (print this help)\n";

    GetOptions (
        "f|file=s"    => \$file,
        "n=i"         => \$howmany,
        "c|crontab=s" => \$crontab,
        "h|help"      => \$help,
    ) or die( $helptext );

    print $helptext and exit if $help;

    if ( $crontab ) { @lines = $crontab }
    elsif ( $file and -e -f -r $file){
        open my $fh, '<', $file or die;
        @lines = <$fh>;
    }
    else{ @lines = <> }

    foreach my $line (
        sort {
            Algorithm::Cron->new( base => 'local',
                crontab => join' ',(split /\s+|\t/,$a)[0..4])->next_time(time)
            <=>
            Algorithm::Cron->new( base => 'local',
                crontab => join' ',(split /\s+|\t/,$b)[0..4])->next_time(time)
        }
        grep { /^(\d|\*)/ } @lines
    ){
        my @parts  = split /\s+|\t/,$line;
        my $now    = time;
        my $repeat = $howmany;
        print scalar localtime ( Algorithm::Cron->new( base => 'local',
            crontab => join' ', @parts[0..4])->next_time($now) );
        print " => ( @parts[0..4] )",($crontab ? "" : " => @parts[5..$#parts]"),"\n";
        if ( --$repeat ){
            while( $repeat > 0){
                $now = Algorithm::Cron->new( base => 'local',
                    crontab => join' ', @parts[0..4])->next_time( $now );
                print scalar localtime ( Algorithm::Cron->new( base => 'local',
                    crontab => join' ', @parts[0..4])->next_time($now) );
                print "\n";
                $repeat--;
            }
        }
    }

    Given a sample file, the following example shows the usage:

    cat crontab.txt
    # Crontab Environmental settings
    SHELL=/bin/bash
    PATH=/sbin:/bin:/usr/sbin:/usr/bin
    MAILTO=root
    00 3 * 7 0 /path/to/command
    #15 20 * 1-7 * /path/to/command2
    */30 7,21 1-15 1 * /path/to/another/command
    # m h dom mon dow user command
    21 * * * * root cd / && run-parts --report /etc/cron.hourly
    0,30 6 * * * root test -x /usr/sbin/blah
    #47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
    52 6 1 * * root test -x /usr/sbin/blahblah || ( cd / && run-parts )

    cat crontab.txt | perl
    Tue Nov 27 14:21:00 2018 => ( 21 * * * * ) => root cd / && run-parts --report /etc/cron.hourly
    Wed Nov 28 06:00:00 2018 => ( 0,30 6 * * * ) => root test -x /usr/sbin/blah
    Sat Dec  1 06:52:00 2018 => ( 52 6 1 * * ) => root test -x /usr/sbin/blahblah || ( cd / && run-parts )
    Tue Jan  1 07:00:00 2019 => ( */30 7,21 1-15 1 * ) => /path/to/another/command
    Sun Jul  7 03:00:00 2019 => ( 00 3 * 7 0 ) => /path/to/command

    cat crontab.txt | perl -n 3
    Tue Nov 27 14:21:00 2018 => ( 21 * * * * ) => root cd / && run-parts --report /etc/cron.hourly
    Tue Nov 27 15:21:00 2018
    Tue Nov 27 16:21:00 2018
    Wed Nov 28 06:00:00 2018 => ( 0,30 6 * * * ) => root test -x /usr/sbin/blah
    Wed Nov 28 06:30:00 2018
    Thu Nov 29 06:00:00 2018
    Sat Dec  1 06:52:00 2018 => ( 52 6 1 * * ) => root test -x /usr/sbin/blahblah || ( cd / && run-parts )
    Tue Jan  1 06:52:00 2019
    Fri Feb  1 06:52:00 2019
    Tue Jan  1 07:00:00 2019 => ( */30 7,21 1-15 1 * ) => /path/to/another/command
    Tue Jan  1 07:30:00 2019
    Tue Jan  1 21:00:00 2019
    Sun Jul  7 03:00:00 2019 => ( 00 3 * 7 0 ) => /path/to/command
    Sun Jul 14 03:00:00 2019
    Sun Jul 21 03:00:00 2019

    perl -c "2-5 9 12 4 *" -n 4
    Fri Apr 12 09:02:00 2019 => ( 2-5 9 12 4 * )
    Fri Apr 12 09:03:00 2019
    Fri Apr 12 09:04:00 2019
    Fri Apr 12 09:05:00 2019

    have fun!


    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
GeoIP revisited
3 direct replies — Read more / Contribute
by cavac
on Nov 23, 2018 at 07:35

    A couple of days ago I looked into GeoIP in Perl+PostgreSQL+GeoIP = Awesome!. Since then I have learned that the GeoIP lists I was using are out of support. The new public lists are in a new format, so I took the opportunity to rewrite the whole thing and do pretty much everything in Perl, without using external commands like "unzip" and "wget". This should make things a bit more portable.
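    Since the full script is below, here is only a minimal sketch of the "pure Perl instead of wget/unzip" idea, using core modules: HTTP::Tiny would handle the download (shown only as a comment so the sketch runs offline), and IO::Uncompress::Unzip replaces the external "unzip". The archive and member names here are invented, not MaxMind's real ones.

```perl
use strict;
use warnings;
use IO::Compress::Zip qw(zip $ZipError);
use IO::Uncompress::Unzip qw(unzip $UnzipError);

# The download step would be roughly (not run here):
#   HTTP::Tiny->new->mirror($list_url, $local_zip);
# Below, a zip archive is built in memory just to demonstrate the
# pure-Perl extraction that replaces the external "unzip" command.
my $csv = "network,country\n203.0.113.0/24,AT\n";
my $archive;
zip \$csv => \$archive, Name => 'GeoLite2-demo.csv'
    or die "zip failed: $ZipError";

my $extracted;
unzip \$archive => \$extracted
    or die "unzip failed: $UnzipError";
print $extracted;
```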

    I'm sorry, it isn't written "nice" and isn't really documented. I designed it as a cron job for a single private server ;-)

    I'm still calling the perl interpreter from a bash script so I can set the correct environment variables and stuff. But it's a lot smaller now:

    #!/usr/bin/env bash

    . ~/.bashrc_activestate
    cd /home/myuser/src/geoip
    perl

    The database table stays exactly the same as in the last post; here it is again for reference:


    And here is the new all-in-one script:

    This script uses the newish "GeoLite2" databases from MaxMind. If you use them, please make sure you comply with the open source licensing stated on their official page.

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
Perl+PostgreSQL+GeoIP = Awesome!
2 direct replies — Read more / Contribute
by cavac
on Nov 21, 2018 at 04:23

    EDIT: WARNING, THIS USES A LEGACY DATABASE THAT IS NOT UPDATED ANYMORE. Please take a look at GeoIP revisited for an updated version that uses an up-to-date version of the MaxMind GeoIP database.

    Sometimes you have to work with GeoIP, e.g. mapping an IP address to the origin country. Be it for legal reasons (geoblocking) or just so you know where your target audience is coming from.

    You could just make online lookups for every request. But if you are running a PostgreSQL database backend anyway, there is a simple way to do it in DB, since PostgreSQL supports a CIDR column type.

    First, let us define a database table:


    Next, we need a bash script we can run from crontab for our daily update:

    Of course, now that the up-to-date GeoIP lists are in the database, it's even possible to add an ON INSERT OR UPDATE trigger to any table that needs GeoIP data. But that I will leave as an exercise for the reader...
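    For the lookup side, PostgreSQL's `>>=` operator tests "network contains address" on cidr/inet values. A minimal sketch of building such a query; the table and column names (geoip, iprange, country) are assumptions, since the table definition isn't reproduced here:

```perl
use strict;
use warnings;

# Hypothetical helper: build the per-IP country lookup for a table
# with a cidr column. ">>=" is PostgreSQL's containment operator.
sub geoip_lookup_sql {
    my ($table) = @_;
    return "SELECT country FROM $table WHERE iprange >>= ?::inet LIMIT 1";
}

# With DBI it would be used roughly like this (not run here):
#   my $sth = $dbh->prepare( geoip_lookup_sql('geoip') );
#   $sth->execute('203.0.113.7');
#   my ($country) = $sth->fetchrow_array;
print geoip_lookup_sql('geoip'), "\n";
```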

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
Rough Delta-V estimate for Hohmann transfer orbits
No replies — Read more | Post response
by cavac
on Nov 20, 2018 at 06:30

    I've been interested in space stuff as far back as I can remember. Quite a while back, Scott Manley had a series on the Youtubes explaining how to calculate orbital changes, and I implemented this in Javascript on my Blog.

    But I always wanted a command line tool in Perl, so let's get to implementing it. It's not the nicest piece of code and it doesn't optimize the transfer burn (check the different options of when to do the burns to minimize Delta-V), nor does it try things like three (or more) burns/multiple intermediate orbits. But it gives a rough idea of "how bad did they miss the target orbit".

    Here is the code (long, so in a "readmore" tag):

    The basic commandline is perl 1000/e0.5/0 1000/1000/10

    The first argument is the source orbit (initial orbit), the second is the target orbit. You can define each orbit in two ways: either using periapsis/apoapsis/inclination or semimajor-axis/eccentricity/inclination. All measurements are in kilometers above the Earth's surface. This example uses both:

    1. Source orbit of 1000/e0.5/0: Semi major axis of 1000km, "e" defined eccentricity mode with an eccentricity of 0.5, and an inclination of 0 degrees. Internally, this gets converted to a periapsis of 500km and an apoapsis of 1500km. Could have also been written as 500/1500/0
    2. Target orbit of 1000/1000/10: Circular orbit of 1000km with a 10 degree inclination. Could have also been written as 1000/e0/10.

    Note: when using ap/per mode, don't worry about using the wrong order, it gets sorted out internally.
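    The two notations described above can be parsed in a few lines. This is an illustrative sketch, not the tool's actual parser: in eccentricity mode, periapsis = a(1-e) and apoapsis = a(1+e), which reproduces the 500/1500 conversion from the example, and in per/apo mode the pair is sorted as noted.

```perl
use strict;
use warnings;

# Sketch (not the tool's own code): turn "1000/e0.5/0" or "1500/500/0"
# into (periapsis, apoapsis, inclination) in km above the surface.
sub parse_orbit {
    my ($spec) = @_;
    my ($first, $second, $incl) = split m{/}, $spec;
    my ($per, $apo);
    if ($second =~ /^e(.+)$/) {          # semimajor-axis/eccentricity mode
        my ($sma, $ecc) = ($first, $1);
        ($per, $apo) = ($sma * (1 - $ecc), $sma * (1 + $ecc));
    }
    else {                               # periapsis/apoapsis mode, any order
        ($per, $apo) = sort { $a <=> $b } ($first, $second);
    }
    return ($per, $apo, $incl);
}

my @src = parse_orbit('1000/e0.5/0');
print "@src\n";    # 500 1500 0, matching the example above
```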

    And here is the result of our calculation:

    Constants:
    Earth radius: 6371000 meter
    Earth mass: 5.97219e+24 kg
    Gravity: 6.67384e-11
    mu: 398574405096000
    Source orbit is using eccentricity notation, converting...
    Converting to meter...
    Adding earth radius...
    Orbital parameters for Source orbit:
      Periapsis: 6871000 m
      Apoapsis: 7871000 m
      Inclination: 0 deg
    Converting to meter...
    Adding earth radius...
    Orbital parameters for Target orbit:
      Periapsis: 7371000 m
      Apoapsis: 7371000 m
      Inclination: 10 deg
    Calculating semi major axis...
    Defining Hohman orbit...
    Orbital parameters for Hohman orbit:
      Periapsis: 6871000 m
      Apoapsis: 7371000 m
      Inclination: 0 deg
    Calculating first burn...
    Data for burn at source orbit periapsis:
      Speed before burn: 7870.39409917514 m/s
      Speed after burn: 7748.85334888779 m/s
      Required Delta-V: 121.540750287353 m/s
    Calculating second burn...
    Need to include plane change in second burn...
    Data for burn at Hohman orbit apoapsis:
      Speed before burn: 7223.22227109049 m/s
      Speed after burn: 7353.45599234394 m/s
      Inclination before burn: 0 deg
      Inclination after burn: 10 deg
      Inclination change: 10 deg
      Required Delta-V: 1277.04850388302 m/s
    Total Delta-V requirement: 1398.58925417037 m/s

    Yeah, nearly 1.4 km/s of required Delta-V. Not good. Better call our Kerbals to build us a new sat, this one isn't going to make it...
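    The printed speeds can be reproduced with the vis-viva equation, v = sqrt(mu * (2/r - 1/a)). This is only a sanity check recomputing the first burn from the constants shown in the output, not the tool's own code:

```perl
use strict;
use warnings;

# Recompute the first burn from the vis-viva equation
#   v = sqrt( mu * (2/r - 1/a) )
# using mu and the orbit radii printed in the output above.
my $mu = 398574405096000;

my $r        = 6871000;                      # burn at source orbit periapsis
my $a_source = (6871000 + 7871000) / 2;      # source orbit semi-major axis
my $a_hohman = (6871000 + 7371000) / 2;      # transfer orbit semi-major axis

my $v_before = sqrt( $mu * (2/$r - 1/$a_source) );
my $v_after  = sqrt( $mu * (2/$r - 1/$a_hohman) );

printf "before: %.2f m/s\n", $v_before;          # ~7870.39 m/s
printf "after:  %.2f m/s\n", $v_after;           # ~7748.85 m/s
printf "dv:     %.2f m/s\n", abs($v_after - $v_before);   # ~121.54 m/s
```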

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
curl2lwp - convert Curl command line arguments to LWP / Mechanize Perl code
1 direct reply — Read more / Contribute
by Corion
on Nov 14, 2018 at 13:07

    After inspiration by Your Mother and some slight hacking, I'm very proud to announce HTTP::Request::FromCurl, together with its companion online site. The module and the included curl2lwp program allow you to easily convert curl command lines to Perl code that uses LWP::UserAgent resp. WWW::Mechanize.

    Some code: -X GET -A xpcurl/1.0

    outputs the following code:

    my $ua = WWW::Mechanize->new();
    my $r = HTTP::Request->new(
        'GET' => '',
        [
            'Accept'     => '*/*',
            'Host'       => '',
            'User-Agent' => 'curl/1.0',
        ],
    );
    my $res = $ua->request( $r, );

    The online version creates a bit more code, as the output there is likely not consumed by advanced Perl programmers.

    The module parses a subset of the valid curl command lines and generates equivalent code for LWP::UserAgent for it. Support for other HTTP user agents (Mojo::UserAgent, AnyEvent::HTTP, HTTP::Future) is not yet implemented but I welcome contributions there.

    The app driving the online interface is not yet released onto CPAN, but as it is mostly HTML scaffolding and some Javascript, it should be released fairly soon.

Google API Browser
No replies — Read more | Post response
by localshop
on Nov 12, 2018 at 09:42

    As I continue my pilgrimage toward becoming passably proficient with Mojo and Google Cloud Services, I have been tinkering away with WebService::GoogleAPI::Client. As a working example, I was reasonably happy with the ease with which I could produce a basic Google API Explorer that presents the methods and parameters of all the Google discoverable API endpoints. This is proving a handy starting point for constructing working examples that access the APIs.

    I plan to extend this to firstly include required scopes, then provide OpenAPI YAML and perhaps ultimately replicate many of the features of Google's API Explorer.

    You can see the Mojo Application running as a Hypnotoad socket served application under CPANEL/WHM hosted environment at

    Today I'm working on the Google Drive API Example available in the Github Repo as a demo of an alternative approach to using a dedicated CPAN module such as the just released Net::Google::Drive

    If anybody has any interesting use cases requiring access to Google Cloud Services let me know. I'm trying to add a new example every few days.

Binary vs. linear search
3 direct replies — Read more / Contribute
by reisinge
on Nov 12, 2018 at 07:46

    I was trying to get my head around the binary search algorithm. I did it by comparing it to the linear search algorithm:

    #!/usr/bin/perl
    use warnings;
    use v5.14;

    # Find this word ...
    my $find = shift // "";

    # ... in this sorted list of words ...
    my @words = qw(alpha bravo charlie delta echo foxtrot golf hotel india
      juliett kilo lima mike november oscar papa quebec romeo sierra tango
      uniform victor whiskey xray yankee zulu);

    # ... using two search algorithms
    my %search = (
        linear => \&linsearch,
        binary => \&binsearch,
    );

    for my $alg ( sort keys %search ) {
        say "$alg searching '$find' in [@words] ...";
        my $idx = $search{$alg}->( $find, \@words );
        say defined $idx ? "found at index $idx" : "not found";
        say "";
    }

    sub binsearch {
        my ( $find, $array ) = @_;
        my $low  = 0;
        my $high = @$array - 1;
        while ( $low <= $high ) {
            my $try = int( ( $low + $high ) / 2 );
            say "--> trying at index $try";
            $low  = $try + 1, next if $array->[$try] lt $find;
            $high = $try - 1, next if $array->[$try] gt $find;
            return $try;
        }
        return;
    }

    sub linsearch {
        my ( $find, $array ) = @_;
        for ( my $i = 0 ; $i < @$array ; $i++ ) {
            my $try = $i;
            say "--> trying at index $try";
            if ( $array->[$try] eq $find ) {
                return $try;
            }
        }
        return;
    }
    Genius is 1 percent inspiration and 99 percent perspiration. -- Thomas Edison
Achievements Steaming ahead
No replies — Read more | Post response
by GrandFather
on Nov 12, 2018 at 01:09

    I play a bit of Civ V. Like many games these days I downloaded it using Steam and one of the "features" of the system is that Steam keeps track of various in game achievements. So, silly me, I've been sucked into playing the "get the achievements" meta game.

    Which is all very fine, but Steam doesn't do a great job of showing you the achievements. It doesn't seem able to sort them in the order that they have been achieved, so sometimes it can be hard to know if you got that last thing or not. So, Perl to the rescue! The following script parses the Copy & Pasted achievements list from the Steam page and sorts them, first listing the "not yet achieved" entries, then the achieved items sorted by the order in which they were achieved.

    use strict;
    use warnings;

    my %monthOrd = (
        Jan => 1, Feb => 2, Mar => 3, Apr => 4,  May => 5,  Jun => 6,
        Jul => 7, Aug => 8, Sep => 9, Oct => 10, Nov => 11, Dec => 12,
    );

    my @records = grep {/\n/ and /[^\n]/} do {local $/ = "\n\n"; <DATA>};
    s/^\n+|\n$//gs for @records;

    for my $record (@records) {
        my @lines = split "\n", $record;
        $lines[0] //= '';
        my ($day, $month, $year, $time) =
            $lines[0] =~ /(\d+)\s+(\w+)(?:, (\d+))?\s+\@\s+(\S+)/;
        if (defined $month && exists $monthOrd{$month}) {
            $year ||= 2018;
            $lines[0] = sprintf "%04d %2d %2d %7s",
                $year, $monthOrd{$month}, $day, $time;
        }
        else {
            unshift @lines, '';
        }
        $record = \@lines;
    }

    # Sort records by not achieved first. Remove blank lines from records
    @records = map {$_ = [grep {$_} @$_]; $_}
        sort {$a->[0] cmp $b->[0]} @records;

    print join "\n", @$_, '', '' for @records;

    __DATA__
    Unlocked 18 Feb, 2016 @ 11:39pm
    First in the Hearts of Your Countrymen
    Beat the game on any difficulty setting as Washington.

    Unlocked 2 Jun, 2016 @ 7:48pm
    Video et Taceo
    Beat the game on any difficulty setting as Elizabeth.

    Unlocked 18 Nov, 2017 @ 4:57pm
    Vivre La Revolution
    Beat the game on any difficulty setting as Napoleon.

    Unlocked 25 Apr, 2016 @ 9:44am
    Blood and Iron
    Beat the game on any difficulty setting as Bismarck.

    Red Badge of Courage
    Win the Civil War scenario on Deity.

    Pickett's Recharge
    Capture Gettysburg with a Confederate Infantry unit possessing the George Pickett promotion.

    Sheridan's Valley Campaign
    As Union, control Winchester, Front Royal, Harrisonburg, Staunton, and Lynchburg.


    Red Badge of Courage
    Win the Civil War scenario on Deity.

    Pickett's Recharge
    Capture Gettysburg with a Confederate Infantry unit possessing the George Pickett promotion.

    Sheridan's Valley Campaign
    As Union, control Winchester, Front Royal, Harrisonburg, Staunton, and Lynchburg.

    2016  2 18 11:39pm
    First in the Hearts of Your Countrymen
    Beat the game on any difficulty setting as Washington.

    2016  4 25  9:44am
    Blood and Iron
    Beat the game on any difficulty setting as Bismarck.

    2016  6  2  7:48pm
    Video et Taceo
    Beat the game on any difficulty setting as Elizabeth.

    2017 11 18  4:57pm
    Vivre La Revolution
    Beat the game on any difficulty setting as Napoleon.
    Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
Generating SVG Badges with Perl
No replies — Read more | Post response
by haukex
on Nov 10, 2018 at 09:20

    If you've looked at projects on GitHub or other sites, you may have noticed the little rectangular badges (aka shields) saying things like "build passing" with a nice green background (or yellow/red if something is wrong), and if you've used Travis CI or Coveralls, you probably know how to get these badges. There is also a nice service that can generate these badges for you, which is written in JavaScript.

    There didn't seem to be anything like that for Perl, so I wrote Badge::Simple!

    Thanks to a super quick turnaround by Kenichi Ishigaki, these badges are already integrated into CPANTS: Try accessing or and you should see a badge for your Kwalitee score (PNG badges have also been available for quite some time, just use .png instead of .svg).

    In the repository, you can find the script, which you can use to generate CPAN Testers badges for your CPAN modules. There is an open issue to perhaps integrate this directly with CPAN Testers, if anyone would like to take a stab at that, please feel free :-)

    You can see all this in action, for example, in the Badge-Simple readme: the "Kwalitee" and "CPAN Testers" badges were generated with Perl!

[WEBPERL] dynamically importing non-bundled modules via http
2 direct replies — Read more / Contribute
by LanX
on Nov 09, 2018 at 10:07
    (in continuation to webperl: fetching data over the web)

    At the moment, WebPerl requires all needed modules to be statically bundled.

    The following Proof of Concept shows how to dynamically use non-bundled modules.

    The modules need to be pure Perl and have to be present inside the $WEBLIB directory on your server (respecting the same-domain policy avoids much trouble).

    I just copied the desired libs from my installation and listed the allowed modules in %INC_FETCH (to limit unnecessary traffic).

    The following page demonstrates how to use Data::Dump from weblib.

    Data::Dump is currently not bundled with WebPerl. (Data::Dumper is, since it's core.)

    The mechanism of adding a call-back to @INC to dynamically fetch modules from various sources is described in require
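    The hook mechanism itself can be demonstrated without WebPerl or HTTP at all. A minimal sketch where the @INC callback serves a module from an in-memory string instead of fetching it over the network (the module name here is invented for illustration):

```perl
use strict;
use warnings;

# An @INC hook, as described in perldoc -f require: instead of fetching
# over HTTP, this one serves module source from an in-memory hash.
my %VIRTUAL_INC = (
    'My/Demo.pm' => <<'EOM',
package My::Demo;
sub greet { "hello from virtual module" }
1;
EOM
);

push @INC, sub {
    my ($self, $filename) = @_;
    return unless exists $VIRTUAL_INC{$filename};
    # a filehandle opened on the source string; require reads from it
    open my $fh, '<', \$VIRTUAL_INC{$filename} or return;
    return $fh;
};

require My::Demo;
print My::Demo::greet(), "\n";   # prints "hello from virtual module"
```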

    <!doctype html>
    <html lang="en-us">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>WebPerl &lt;script&gt; Demos</title>

    <script type="text/javascript">
      function xfetch(url) {
          var req = new XMLHttpRequest();
          req.open('GET', url, false);
          req.send(null);
          if(req.status == 200) {
              return req.responseText;
          } else {
              return "";
          }
      }
      // alert(xfetch('http://localhost:5000/lib/Data/'));
      // alert(xfetch('http://localhost:5000/webperl.js'));
    </script>

    <script src="webperl.js"></script>

    <!-- Please see the documentation at -->

    <script type="text/perl">
    use warnings;
    use strict;
    use Data::Dumper;
    use WebPerl qw/js/;

    BEGIN {
        my $WEBLIB = 'localhost:5000/lib';

        # gather source of module
        my $fetch = sub {
            my ($module_path) = @_;
            return js("window.xfetch('http://$WEBLIB/$module_path')");
        };

        # allowed modules in weblib
        my %INC_FETCH = (
            'Data/' => 1,
        );

        # loader hook for @INC
        my $loader = sub {
            my ($sub, $filename) = @_;
            if ( $INC_FETCH{$filename} ) {
                my $source = $fetch->($filename);
                unless ($source) {
                    warn "Fetching $filename from $WEBLIB failed";
                    return;
                }
                open my $fh_source, "<", \$source;
                my $pre = '';
                #$pre = qq{warn '*** Loading $filename ***';};
                return (\$pre, $fh_source);
            }
            return;
        };

        push @INC, $loader;
    }

    use Data::Dump qw/pp/;

    my $HoA = { map { $_ => [reverse 1..5] } "a".."d" };

    warn pp $HoA;

    #use Data::Dumper;
    #warn Dumper $HoA;
    </script>

    <!-- Optional STDOUT/STDERR text area (if you don't use this, output goes to Javascript console) -->
    <script>
      window.addEventListener("load", function () {
          document.getElementById('output')
              .appendChild( Perl.makeOutputTextarea() );
      });
    </script>

    </head>
    <body>

    <p>This is a demo of <a href="" target="_blank">WebPerl</a>!</p>

    <div id="output"></div>

    <div id="buttons">
        <button id="my_button">Testing!</button>
    </div>

    </body>
    </html>


    {
      a => [5, 4, 3, 2, 1],
      b => [5, 4, 3, 2, 1],
      c => [5, 4, 3, 2, 1],
      d => [5, 4, 3, 2, 1],
    } at /tmp/ line 56.

    DISCLAIMER: This code is beta and follows the release often paradigm.

    Successfully tested with Chrome AND Firefox. FF showed new "Content Security Policy" problems; I didn't have the time to dig in and install the necessary Plack CORS modules. °

    The principle is universal: as soon as WebPerl can run in a browser and has the capacity to dynamically fetch code, using unbundled (pure Perl) modules becomes trivial.

    Cheers Rolf
    (addicted to the Perl Programming Language :)
    Wikisyntax for the Monastery FootballPerl is like chess, only without the dice

    °) Works like a charm in FF; I just forgot to disable "noscript" filtering for localhost! ;)

read an mp3 stream and save tracks using icy-metaint (ICY protocol)
No replies — Read more | Post response
by Discipulus
on Nov 08, 2018 at 03:12
    Hello folks!

    It all began when I discovered that my shoutcast bookmark was no longer opening the usual list of internet streams ordered by genre..

    I said to myself: I know a bit of Perl.. I'll do it my own way.. and I started investigating mp3 streams (filling my screen with binary #*%_ many times ;) and even if I've not reached my original goal, I ended up with something useful and cool.

    The program will listen to an mp3 stream and save each song with the correct title, skipping advertisements if possible and doing its best to put an entire song in each file.

    The POD documentation describes what I have understood of the ICY protocol (undocumented).

    Suggestions and improvements welcome, have fun!
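    The framing at the heart of it boils down to a few lines: one length byte multiplied by 16, a NUL-padded metadata block, and a StreamTitle to capture. A self-contained sketch (the sample bytes and title are invented):

```perl
use strict;
use warnings;

# The ICY framing in miniature: after each music chunk comes one byte;
# its value * 16 is the number of metadata bytes that follow.
my $metabyte = "\x03";                         # as read from the socket
my $metalen  = unpack("C", $metabyte) * 16;
print "$metalen bytes of metadata follow\n";   # 48

# The metadata block is NUL-padded; unpack "A" trims the padding,
# then StreamTitle can be captured with a regex.
my $metadata = "StreamTitle='The Clash - Loose this skin';" . "\0" x 6;
$metadata = unpack("A" . length($metadata), $metadata);

my ($title) = $metadata =~ /^StreamTitle='([^;]*)';/i;
print "now playing: $title\n";   # The Clash - Loose this skin
```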

    use strict; use warnings; use Net::HTTP; use URI; use File::Spec; use Getopt::Long; $|++; my $VERSION = 23; # SOME DEFAULTS my $debug = 0; # --extraparts default value my $cache_size = 2; my $url; my $agent = 'Stream-Cutter-v'.$VERSION; unless ( GetOptions ( 'url=s' => \$url, 'agent=s' => \$agent, 'extraparts|cache=i' => \$cache_size, 'debug=i' => \$debug, + )) {die "$0 -url URL [-agent STRING -extraparts N -debug [0-2] +]"} unless ( $url ){ print "which URL you want to open?"; $url = <STDIN>; chomp $url; } # OTHER VARIABLES # chunk number for debug purpose my $num = 0; # cache used to have more chunks wrote to a file when new song starts my @cache; # used to append to previous file # how_many parts will be equal to $cache_size when new song begin my %previous_file = ( name => undef, how_many => 0); my ( $socket, $icymetaint ) = open_connection( $url ); die "unable to get icy-metaint!" unless defined $icymetaint and $icymetaint > 0; read_stream( $socket, $icymetaint ); ###################################################################### +######### sub open_connection { my $url = shift; my $uri = URI->new( $url ); my $sock = Net::HTTP->new( Host => $uri->host, PeerPort => $uri->port, ) or die $@; $sock->write_request( GET => $uri->path, 'User-Agent' => $agent, # very important: ask for metadata! 
'Icy-MetaData' => 1 ) or die $@; my ($http_code, $http_mess, %headers) = $sock->read_response_heade +rs; print join ' ', "\nConnecting to:\t",$uri->as_string, "\nStatus:\t",$http_code,$http_mess,"\n"; # go on if everything is OK 200 if ( $http_code == 200){ # grab useful headers and set them to empty string if undefine +d map {$headers{$_} = $headers{$_} // ''} 'Server','icy-name',' +icy-name', 'icy-genre','icy-br'; print join "\n","Server:\t".$headers{'Server'}, "name:\t".$headers{'icy-name'}, "genre:\t".$headers{'icy-genre'}, "byte rate:\t".$headers{'icy-br'}."kb/s\n\n"; if ( $debug ){ print "HEADERS:\n", (map {qq(\t$_\t=>\t$headers{$_}\n)} grep{defined $headers{$_}} %headers),"\n\n"; } return ($sock, $headers{'icy-metaint'}); } # return undef if not OK 200 else { print "Errors opening the given site..\n"; return undef; } } ###################################################################### +######### sub read_stream { my ($socket, $metaint) = @_; # output filehandle my $out; my $new_metadata; my $file_name; while( 1 ) { my $buffer; # READ the chunk of music $socket->read($buffer, $metaint); # CHECK for new metadata if ( $new_metadata = read_meta($socket)){ # WRITE and get back the NEW filehadle $out = write_stream( $buffer, $out, $new_metadata ); + } else{ # WRITE and get back the OLD filehadle $out = write_stream( $buffer, $out ); } } } ###################################################################### +######### sub read_meta{ my $socket = shift; my ( $metalen, $metabyte); $socket->read($metabyte, 1); $metalen = unpack("C",$metabyte) * 16; if( $metalen > 0) { # We have NEW metadata! 
JOY print "[$metalen metadata] " if $debug > 1; my $metadata; $socket->read($metadata, $metalen); $metadata = unpack("A$metalen", $metadata); print "\nMETADATA: [",$metadata,"]\n" if $debug > 1; return $metadata; } else { return undef; } } ###################################################################### +######### sub write_stream{ my ($buf, $out, $new_metadata) = @_; # count the overall chunk count for debug purpose $num ++; # NEW song got from metadata if ( $new_metadata ){ my $track_name = $1 if $new_metadata =~ /^StreamTitle='([^ +;]*)';/i; # if StreamTitle is empty probably is an advertisement. Fo +re example: # METADATA: [StreamTitle='';StreamUrl='';adw_ad='true'; # durationMilliseconds='20009';adId='12161';insertionType= +'preroll'; print "\ncurrently playing:\t". ($track_name ? $track_name : '**advertisement**'). +"\n"; if ($out and fileno $out and $cache_size){ print "writing part number [$num] to current file\n" i +f $debug; # DOUBLE write of the current buff print $out $buf ; } my $file_name; ($file_name = $track_name) =~ s/\s+/_/g; $file_name =~ s/\/\\:\*\?\"<>\|//g; $file_name.='.mp3'; # if StreamTitle is empty probably is an advertisement $file_name = File::Spec->devnull() unless $track_name; # set previous filename, but still how_many = 0 $previous_file{name} = $file_name; # the new file open $out, '>', $file_name or die "unable to write to $fil +e_name!"; binmode $out; if ( $cache_size > 0 ){ # PREPEND cache items to the new opened file while ( my $cache_item = shift @cache ) { print "writing cached part to new file: $file_name +\n" if $debug; print $out $cache_item; } } # WRITE $buf to a new file print "writing part number [$num] to new file: $file_name\ +n" if $debug; print $out $buf; } # no new track.. 
else { print "$num " if $debug > 1; # WRITE $buf to the already opened file if ( $out and fileno $out ){ print $out $buf or die; } # check previous_file if needed to be appended if ( $previous_file{name} and $previous_file{how_many} ){ print "appending part to previous file too\n" if $debug; open my $oldfh, '>>', $previous_file{name} or die "unable to open $previous_file{name} in ap +pend mode!"; binmode $oldfh; print $oldfh $buf or die "unable to write!"; close $oldfh or die "unable to close filehandle!"; $previous_file{how_many}--; } else{ $previous_file{name} = undef; $previous_file{how_many} = $cache_size ; } } # cache rotates.. if ( $#cache == $cache_size - 1 ){ shift @cache, } push @cache, $buf; # return the current file handle return $out; } __DATA__ =head1 NAME C<> This program open an mp3 stream and save songs to distinct files. It's + intended to understand the ICY protocol and not intended to save copirighted da +ta. =head1 SYNOPSIS -url URL [-agent STRING -extraparts N -debug 0- +2] --url URL is the only necessary argument. Url must be complete of the protoc +ol --agent STRING you can use a custom user-agent to send to server during the conne +ction. Agent defaults to Stream-Cutter-v with the version number of the p +rogram appended. You can find useful to use the string WinampMPEG/2.9 if +refused by some server --extraparts N This parameter governs how many extra parts of the stream have to +be prepended to a new file (via cache) and appended to the previous file (via reopening and appending). --extraparts defaults to 2 that is the b +est I found to have an entire song to the correct file and not to much junk in + it (parts of other songs). --cache is an alias for --extraparts --debug 0-2 With -debug 0 only few details of the server and the title of the +current song will be displayed. 
With -debug 1 also headers received from the server are shown and +all operations involving new files creation and extra parts possibly (see --extra +parts) wrote to these files Debug level 2 will display also each metadata received (if it cont +ains data) and a progressive number for each chunk of music received =head1 DESCRIPTION This program was inspired by a post wrote by qbxk for perlmonks (see r +eferences). The core part of the program is just a rewrite of the original code by + qbxk The ICY protocol is not well documented. It's build on top of the HTTP + one. This program can help you to understand it in a better way. Basically music + chunks are intercalated with metadata chunks at the position given by the C<icy-m +etaint> header value. At this position you will find a lone byte indicating the lengt +h of the following metadata. If this byte is not 0 but N, then the following N +bytes will be of metadata. Normally in the metadata you find the C<StreamTitle> cont +aining the title of the current song. You can also find the C<StreamUrl> generally empt +y and other things like C<adw_ad> related to advertisements, followed by the duration of +the advertisement and other characteristics of the advertisement. So a typical chunk of metadata for a new song in the stream will be li +ke: C<StreamTitle='Beethoven - Goldberg Variations';StreamUrl='';> or sometimes just like: C<StreamTitle='The Clash - Loose this skin';> without the C<StreamUrl> part, while an advertisemente will look like: C<StreamTitle='';StreamUrl='';adw_ad='true';durationMilliseconds='2000 +9';adId='12161';insertionType='preroll';> The current version of the program will try to skip advertisements che +cking for empty C<StreamTitle> and then using C<File::Spec>'s C<devnull()> a +s filename to save the stream. In the headers of the HTTP request you had to ask for C<Icy-MetaData>, + then the server will answer with various icy headers, notably C<icy-metaint> that is the dimension + of music chunks. 
After each chunk there will be a byte containing the length of the following
metadata. If this is 0, no metadata follows; if it is a number, a corresponding
number of bytes has to be read to get the metadata back, typically the title and
the author.

The problem is that the title arrives when the song has already started, so I
decided to add a cache (see the C<--extraparts> argument) to append and prepend
chunks to songs. This way you will probably have unneeded data at the start and at
the end of each file, but for sure the entire song.

Let's say Icy-MetaData is 5 (generally it is 16k); you have a situation like this
('=' is a chunk):

   --- unknown song(1) ---    ------ The Clash - Lose This Skin ------    ...
            |                |                                       |
  STREAM-> = = = [0] = = = = = [3][*][*][*] = = = = = [0] = = = = = [0] = = = ...
                  |             |  |______|            |
                  |             |     |                 ----- empty metadata
                  |             |      ----- metadata with the new title
                  |              ----- length of metadata
                   ----- empty metadata (unknown song continues)

(1) about the unknown song: you will probably never get an unknown song; I suspect
that the ICY protocol sends icy metadata as the first part of a brand new response.

=head1 REFERENCES

See the original post by qbxk at L<perlmonks|/>

L<a post about ICY protocol|2/pulling-track-info-from-an-audio-stream-using-php/4914538#4914538>

L<The ICY protocol explained|shoutcast.html>

L<A very detailed tutorial|shoutcast-internet-radio-protocol.html>

L<a not complete but useful description of ICY|https://www.radiotoolbo>

L<a technical article about streaming networks|johanmon/dse/casty.pdf>

=head1 AUTHOR

This program is by Discipulus, as found in, with the fundamental
inspiration of the above mentioned qbxk.

This program is licensed under the same terms as the Perl language.
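The interleaving described above can be sketched in a few lines of Perl. This is a minimal, self-contained illustration and not part of the program itself; the helper name and the fake stream are mine. Note that real Shoutcast servers actually multiply the length byte by 16, while this sketch follows the simplified description above and uses the byte value directly.

```perl
use strict;
use warnings;

# Minimal sketch of the interleaving: every $metaint bytes of audio are
# followed by one length byte N, then N bytes of metadata (N = 0 means no
# metadata). Real ICY servers multiply the length byte by 16; here we
# follow the simplified description above.
sub split_icy {
    my ($buf, $metaint) = @_;
    my ($audio, @meta) = ('');
    my $pos = 0;
    while ($pos < length $buf) {
        $audio .= substr $buf, $pos, $metaint;
        $pos   += $metaint;
        last if $pos >= length $buf;
        my $len = ord substr $buf, $pos++, 1;                # the lone length byte
        if ($len) {
            (my $m = substr $buf, $pos, $len) =~ s/\0+\z//;  # strip NUL padding
            push @meta, $m;
            $pos += $len;
        }
    }
    return ($audio, @meta);
}

# Fake stream: 5 audio bytes, a 16-byte StreamTitle block, 5 more audio bytes.
my $fake = 'AAAAA' . chr(16) . "StreamTitle='x';" . 'BBBBB' . chr(0);
my ($audio, @meta) = split_icy($fake, 5);
print "audio=$audio meta=@meta\n";   # audio=AAAAABBBBB meta=StreamTitle='x';
```

The real program does the same thing on a socket, switching output files whenever the metadata announces a new C<StreamTitle>.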


    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
Migrate phone numbers from Nokia 6303c to Nokia 216 then to CAT B30, or the quest for a sane phone
2 direct replies — Read more / Contribute
by ambrus
on Nov 06, 2018 at 17:23

    For a very long time, I've been using a Nokia 6303 classic phone. I was very satisfied with that phone. The built-in camera took good photos, I could use the music player to listen to music through headphones, and the user interface was usable.

    You may recall that back in 2016, I even wrote a perl script to decode the contact lists after that phone backs them up into a zip file (with .NBF extension) containing individual files for each contact. That script exports phonebook entries into a semicolon-separated file with lines like this:

    BME központi;+3614631111

    That's a simple entry. Some lines list additional data, such as multiple phone numbers and possibly text notes in the same entry, separated by semicolons. I never figured out how to make a backup file that the phone could import, though. In fact, I once had to restore all the backups by hand.
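    For illustration, here is a minimal sketch of parsing such a line. The field layout is inferred from the example entry above, and the extra "porta" note is hypothetical; this is not the original decoding script.

```perl
use strict;
use warnings;

# Split one exported line into a name plus its fields; anything that looks
# like a phone number is kept as a number, the rest is treated as a note.
# (Field layout inferred from the example entry above.)
sub parse_entry {
    my ($line) = @_;
    chomp $line;
    my ($name, @fields) = split /;/, $line;
    my @numbers = grep {  /^\+?[0-9#*p]+$/ } @fields;
    my @notes   = grep { !/^\+?[0-9#*p]+$/ } @fields;
    return { name => $name, numbers => \@numbers, notes => \@notes };
}

my $e = parse_entry("BME központi;+3614631111;porta\n");
print "$e->{name}: @{ $e->{numbers} } (@{ $e->{notes} })\n";
# BME központi: +3614631111 (porta)
```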

    In 2017-11, I bought a Nokia 216 as a spare phone, because I figured that if I lose my phone or it becomes non-functional, I'd like to have a spare phone at hand immediately. That one only has a much worse camera, but that didn't matter anymore, because I had a pretty good compact camera now. I charged the phone and verified that it worked, then put it in a drawer.

    In 2018-08, I lost the Nokia 6303. I cleaned it with too much water, which in itself wouldn't have been a problem, but I then put the battery back in the phone before it dried properly. The phone turned on, but went off after a few seconds, and I couldn't revive it after drying. I'd like to add that this was the second time the phone got wet; it had survived falling into the toilet once before.

    So I mourned for the old phone, but was happy that I had the foresight to have bought a replacement earlier. For a few days.

    I actually also had the foresight to have most of the important phone numbers copied to the SIM card, so I could transfer those phone numbers to the new phone by copying from that, and entered a few more important ones from the dump of the backup, so I had like fifty important phone numbers in the phone. You may ask why I don't just transfer all phone numbers through SIM cards then, since SIM cards are pretty cheap, and I have several old spare ones in my drawer. The problem is that the contact list stored on SIM cards has some big limitations: names can't be longer than about 15 bytes (some characters take more than one byte, I don't know the exact rule), the card can only store 250 contacts (I already had more than that back then), and the card can't store additional information such as notes.

    Anyway, I at least got a phone that I could use on a temporary basis, and transferring the whole contact list was something that could wait a few days. But then I actually tried to use the phone for other tasks, and it turned out to be a disaster.

    It only took a few days to find out how terrible the user interface of the Nokia 216 was. How I raged!

    There are only three good things I can say about the Nokia 216: it generally reacts quickly enough to keypresses, it accepts two SIM cards, and it's possible to import a contact list prepared on a computer.

    Here's how I imported the contact list. After finding out the limitations of the phone, I first edited the semicolon-separated backup file to shorten the names and otherwise clean up the list. While I was there, I fixed all the names to have the correct letters, because some of the entries had names inherited from one of the two even older phones, which didn't have a full character set, so they had characters like "ŕ" instead of "á" and "ö" instead of "ő". I never bothered to fix that on the 6303, even though that one already supported all the letters of Hungarian, and I entered all the new names with the correct characters. Then I used a messy perl script to verify that the list of contacts looked fine and to convert it to the format that the 216 accepts, which I reverse engineered from an exported contact list in a few tries. Here's the code, with a few details omitted.

    This emits a single contacts file in vcard format with entries like this.


    Two phone numbers in the same entry are handled by writing it as two entries, but extra notes are discarded.

    You must copy that file to the SD card (either with an SD card reader or through a USB cable) as "/Backup/backup.dat", then restore the backup in the settings. Note that this will erase the existing contacts in the memory of the phone (but not the SIM card).
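    A hedged sketch of what one entry of that backup file boils down to. The shape is inferred from an exported backup and from the one-liner further down that parses it back; the TEL;PREF;CELL type and the exact field layout are assumptions, and this is not the omitted script.

```perl
use strict;
use warnings;
use MIME::QuotedPrint qw(encode_qp);

# Write one vCard 2.1 entry in the shape the Nokia 216 exports. The
# TEL;PREF;CELL type and the field layout are assumptions inferred from
# an exported backup, not a verified specification of the phone's format.
sub vcard_entry {
    my ($name, $number) = @_;
    my $qp = encode_qp($name, '');   # quoted-printable, no soft line breaks
    return join "\r\n",
        'BEGIN:VCARD',
        'VERSION:2.1',
        "N;ENCODING=QUOTED-PRINTABLE;CHARSET=UTF-8:;$qp;;;",
        "FN;ENCODING=QUOTED-PRINTABLE;CHARSET=UTF-8:$qp",
        "TEL;PREF;CELL:$number",
        'END:VCARD',
        '';
}

print vcard_entry('BME központi', '+3614631111');
```

    Accented names come out as quoted-printable escapes of their UTF-8 bytes (e.g. "ö" becomes "=C3=B6"), which is why the CHARSET and ENCODING parameters are on the N and FN lines.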

    Anyway, I eventually set off to try to buy a better phone. That's not easy. It's hard to find reviews of cheap feature phones, or find copies that I can try without having to buy them.

    Eventually in 2018-11, I bought a CAT B30 phone.

    So I had to figure out how to move the contact list to the CAT B30 phone.

    Copy the file from the Nokia 216 to the computer. Here I used the default filename "backup.dat" (I actually save the backups in more sensible names including a date so I can distinguish backups, but this is just an example). Get the backup rewritten with this one-liner:

    perl -we 'while (<>) { if (/^\r\n\z/) {} elsif ($c) { $c=0; /^(.*);;;\r\n\z/ or die "nc $_"; $f=";CHARSET=UTF-8;ENCODING=QUOTED-PRINTABLE:"; print "N$f;$;$1;;;\r\nFN$f$1\r\n"; } elsif (/^N;/) { /^N;ENCODING=QUOTED-PRINTABLE;CHARSET=UTF-8:;=\r\n\z/ or die; $c=1; } elsif (/^TEL;/) { /^TEL;[A-Z]+;(CELL:[+0-9#*p]+)\r\n/ or die qq(tel $_); print "TEL;CELL:$1\r\n"; } elsif (/^(BEGIN:|VERSION:|END:)/) { print } else { die "parse $_" } }' backup.dat > backup.vcf

    This outputs the records in the correct format for the CAT B30:


    Copy the output file ("backup.vcf") onto the CAT B30; this time you can use any filename and any directory, as long as the extension is ".vcf". Delete the existing phone numbers from the contacts list, because the CAT B30 doesn't do this automatically; it merges the imported contacts into the existing list. Then import the contacts from the contacts menu by selecting that file on the SD card.

    (This script doesn't handle assigning the custom ringtones to family members directly, I'll set those directly in the phone.)

Dancer2 Module for Generating HTML from markdown
1 direct reply — Read more / Contribute
by nysus
on Nov 03, 2018 at 20:50

    I wrote a new module for converting markdown files to HTML for use with Dancer2. No perl code is necessary to use it! Just make some entries in your config.yml file and you are done. I think it'll be very useful. For example, you can now just throw your copy into a repo with markdown files, clone it to your local hard drive, and never have to mess with HTML again.

    It has two basic modes of operation right now: convert a single markdown file to a single HTML document, or convert all markdown files in a directory to an HTML document. It can also automatically generate a table of contents for the file that links to the headers in the content.

    I've used it to generate a pretty nifty looking tutorial that's super easy to navigate with a table of contents that's actually useful. It's all on one page so no annoying clicking around.

    See it on CPAN and the GitHub repo.

    I plan on making it have tight integration with GitHub in the near future. I also plan on creating yet another local POD viewer with it as well.

    I'd appreciate feedback and ideas for further improvements. Thanks!

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate Priest Vicar";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks

Implementing a 2-mode "Audiobutton" on the Raspberry Pi
1 direct reply — Read more / Contribute
by cavac
on Oct 25, 2018 at 06:40

    Some time ago, I saw those fake "emergency stop" buttons on Amazon that play a funny audio sample when you press them. But they were too expensive and, much more important, you can't change the sound on them. So I implemented my own.

    First I hooked up a real (non-latching) emergency button to a Raspberry Pi GPIO pin, and a set of rather ancient and crappy desktop speakers to the analog out of the Pi. I also prepared some audio files in raw format: decoding MP3 costs performance, so it is not as instantaneous as playing a raw file; the Pi also runs some other performance-critical stuff, and using raw files instead of wav/mp3 fixed some timing issues.

    A short button press plays one of the 3 "bullshit detected" samples. Holding the button for about 1 second before releasing plays the 30 second jeopardy "thinking" music.

    Note: This is actually the "dumb" version of the script. My local implementation also triggers some LED display for the bullshit alerts and runs an analog meter (via an Arduino) from 100% to 0% while the jeopardy music ticks down the seconds.
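    The press-length logic boils down to timing the hold and picking a sample on release. A minimal sketch of that decision, with the pin reader and clock passed in as subs so it runs without real hardware; on a Pi the reader would poll something like /sys/class/gpio/gpio17/value (pin number, polarity, and the sample file names are all placeholders, not cavac's actual setup).

```perl
use strict;
use warnings;

# Decide between the two modes by how long the button was held.
# $read_pin returns true while the button is pressed; $now returns the
# current time in seconds. Both are injected so the logic is testable
# without GPIO hardware. Sample file names are placeholders.
sub wait_for_release {
    my ($read_pin, $now, $long_secs) = @_;
    my $start = $now->();
    select(undef, undef, undef, 0.01) while $read_pin->();   # ~10ms polling
    my $held = $now->() - $start;
    return $held >= $long_secs ? 'jeopardy.raw'
                               : 'bullshit' . (1 + int rand 3) . '.raw';
}

# Simulated button: pressed for two polls, held 1.5 simulated seconds.
my @pin    = (1, 1, 0);
my @clock  = (0, 1.5);
my $sample = wait_for_release(sub { shift @pin }, sub { shift @clock }, 1);
print "$sample\n";   # jeopardy.raw
```

    The short-press branch picks one of three sample names at random, mirroring the "one of the 3 samples" behaviour described above.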

    perl -e 'use MIME::Base64; print decode_base64("4pmsIE5ldmVyIGdvbm5hIGdpdmUgeW91IHVwCiAgTmV2ZXIgZ29ubmEgbGV0IHlvdSBkb3duLi4uIOKZqwo=");'
Perl and Kubernetes
No replies — Read more | Post response
by reisinge
on Oct 23, 2018 at 10:26

    Kubernetes documentation uses Perl (a one-liner) in an example Job:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: pi
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never
      backoffLimit: 4

    They're not afraid to mix and match venerable and newer technologies :-).

    What do you care what other people think? -- Richard Feynman
