Cool Uses for Perl


This section is the place to post your general code offerings -- everything from one-liners to full-blown frameworks and apps.

CUFP's
Minimal, %ENV based templating tool
1 direct reply — Read more / Contribute
by blindluke
on Nov 08, 2015 at 09:12

    There is a nice, minimal templating tool out there called skel. Its author, Scott Vokes, describes it as a "tiny command-line skeleton/snippet thing", which is probably accurate.

    Given a template file name, it expands all occurrences of #{VARIABLE} with the value obtained from %ENV.

    The tool has proven quite useful to me, mostly due to the way it provides a lot of useful features while still allowing you to use it in the most primitive way imaginable.

    Inspired by the simplicity of it, I tried rewriting the most basic version of the idea in Perl, adding the one (the only?) feature I was missing - asking for variable values when the ENV var is unset.

    #!/usr/bin/perl
    use v5.14;

    my %cfg = (
        tmpl_path   => $ENV{HOME}.'/.skel-closet/',
        opening_tag => '#{',
        closing_tag => '}',
    );

    sub prompt {
        my $msg = shift;
        local $| = 1;
        local $\;
        warn "$msg \n";
        my $ans = <STDIN>;
        defined $ans ? chomp $ans : warn "\n";
        return $ans;
    }

    sub filled_line {
        my $line = shift;
        my $pattern = qr/$cfg{opening_tag}(.+?)$cfg{closing_tag}/;
        $line =~ s/$pattern/$ENV{$1} ? $ENV{$1} : prompt("$1 unset. Value:") /eg;
        return $line;
    }

    my $filename = $cfg{tmpl_path}.shift;
    open (my $fh, '<', $filename) or die ("Could not open file: $filename");

    my @output;
    while (<$fh>) {
        push @output, filled_line($_);
    }
    close ($fh);

    print for @output;

    The use is straightforward. Here's an example:

    $ cat ~/.skel-closet/test
    This is the value of HOME: #{HOME}
    This is the value of EDITOR: #{EDITOR}
    This is the value of FOO: #{FOO}

    $ ./pskel.pl test > out.txt
    FOO unset. Value: foovalue

    $ cat out.txt
    This is the value of HOME: /home/blj
    This is the value of EDITOR: emacs
    This is the value of FOO: foovalue

    - Luke

ciscodump - dump config info from routers into files
2 direct replies — Read more / Contribute
by soonix
on Nov 05, 2015 at 09:59

    This is a script for reading config info from a bunch of routers (for archiving, auditing, whatever).

    The configuration for the script itself is read from an INI-style config file (example below). Each router is represented by a section with its name or IP as the section header. The "info" attribute is mandatory (originally intended as title information for a to-be-generated report). There also has to be a section "dump" (which means that if you have a router with a hostname of "dump", you have to refer to it by its IP address).

    Each entry in the "dump" section results in a command sent to the router(s), and the output of this command is written to a file.

    The output is placed in the directory named by "dir" in the top INI section. Each router gets its own subdirectory, and if "debug" is set, there will be a subdirectory "debug" - so you should not use "debug" as a hostname, either :-)

    The "enable" password is needed, because on Cisco devices, the terminal size command is privileged.

    Initially I used Net::Telnet::Cisco, and later adapted it for SSH. After trying SSH modules directly, I settled on Control::CLI. It took me a while to notice that they use different ways of specifying prompts before I got it to work.

    Of course it is bad to have passwords stored as plaintext, but at least they are kept separate from the script itself. It runs under Windows, but should work under other systems, too.

    Additionally, I uploaded it to a github gist, just in case someone wants to tinker.

Yet Another threaded find|grep perl utility
6 direct replies — Read more / Contribute
by QuillMeantTen
on Nov 04, 2015 at 06:10

    Greetings, fellow monks
    Today I was given a C assignment: write a program that, given a top directory to search, a filename to search for, and a maximum number of coexisting threads, will return the absolute path of the file if it is found inside the directory, or the paths of multiple files sharing that name.

    The assignment is quite specific on the way it should be done:

    A main thread creates a named pipe and then uses it to receive subdirectory paths. Those subdirectory paths are used to create new threads that will look for the specified filename inside the directory and send back subdirectories to the main thread.
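
    That design, stripped to its bones, could be sketched in Perl roughly as follows. This is only an illustration of the shape described above, not the author's prototype: it substitutes a Thread::Queue for the named pipe, invents all variable names, and skips error handling.

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use Time::HiRes qw(sleep);

    my ($top, $target, $MAX) = @ARGV;
    my $queue = Thread::Queue->new();
    $queue->enqueue($top);

    my @workers;

    # each worker scans one directory: it reports hits and hands
    # any subdirectories back through the shared queue
    sub scan_dir {
        my $dir = shift;
        opendir my $dh, $dir or return;
        for my $entry (grep { $_ ne '.' && $_ ne '..' } readdir $dh) {
            my $path = "$dir/$entry";
            print "$path\n"        if $entry eq $target;
            $queue->enqueue($path) if -d $path && !-l $path;
        }
        closedir $dh;
    }

    # join finished threads so the thread cap stays meaningful
    sub reap_workers {
        my @alive;
        for my $thr (@workers) {
            if ($thr->is_running) { push @alive, $thr }
            else                  { $thr->join }
        }
        @workers = @alive;
    }

    while (1) {
        reap_workers();
        my $dir = $queue->dequeue_nb;    # non-blocking; undef when the queue is empty
        if (defined $dir) {
            while (@workers >= $MAX) {   # respect the thread cap before spawning
                sleep 0.05;
                reap_workers();
            }
            push @workers, threads->create(\&scan_dir, $dir);
        }
        elsif (@workers) {
            sleep 0.05;                  # running workers may still enqueue more dirs
        }
        else {
            last;                        # queue drained and nobody running: finished
        }
    }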

    I decided to indulge in my new favorite vice: before getting down and dirty with C, write a high-level Perl prototype to spot the potential bugs and hurdles and make writing the C code easier.
    Here it is :D
    I hope you enjoy reading it. It is by no means a better mousetrap, just my take on an interesting problem under specific constraints.

    As usual, even if that last line makes this post improper for CUFP (at least according to one person who told me so):
    If you see bad practices or things that can be done better, do post about them; there is no point in making mistakes if I don't learn from them :)

    Update: The teacher updated the requirements, making it a bit more fun: now I have to look for a string in any file named *.txt, or, if both a string and a filename are specified, find said file and look for the target string inside it.

    I also started using Getopt::Std to make option handling cleaner, and the script now displays the permissions for each file/dir found. Enjoy!

    After re-reading the assignment I put in some more changes. I am almost through with the C implementation. The striking difference is the number of lines written: 500 in C versus 205 in Perl. Here is the latest version of the Perl prototype:

Modules as configuration files
1 direct reply — Read more / Contribute
by Discipulus
on Oct 28, 2015 at 08:59
    OK, it is not so cool nor elaborate, but I find it useful.
    Nowadays I end up writing programs with a lot of options, and I'm happy using Getopt::Long to grab them all. During testing, but even during normal usage of such programs, I hate having a long command line spanning two or more lines of my terminal screen, as happens with the long argument lists given to the find program.

    Among my many other faults, I tend to write programs with core-only modules, and no module for configuration files is in the core.

    Even a minimalist configuration file can contain comments and indentation.

    The basic idea of a Perl module comes to the rescue. In fact, a module can be conveniently loaded before any program you wrote simply by adding -M, as in perl -MModuleName program.pl, and as long as the module resides in the current directory, no other Perl switches are needed.
    Even more: since @ARGV is global by nature, the module can modify @ARGV before Getopt::Long starts its wonderful work inside the main program.

    The result is a short module like this:
    use strict;
    use warnings;

    unshift @ARGV, $_ for reverse
        map { s/\s*#.*$|^\s*#.*$//; (split ' ', $_, 2) }
        split /\n/, <<'EOCONF'
    --put here
    -your
    --list of arguments and
    #any
        #indented
    --comment #comments --too
    EOCONF
    ;
    print "@ARGV" unless caller;
    1;
    Unshifting them lets an option specified in the module be overridden by the same option on the command line. For example, if the module contains --verbosity 3, you can call the program as perl -MConfigDefault program.pl --verbosity 0 and get the right behaviour.
    Please note that split has its LIMIT argument specified, so in the above example --list gets the value "of arguments and", i.e. only two elements are created for each line. This is the desired behaviour.
    The final result seems like:
    perl -MConfigDefault program.pl --verbosity 0
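
    On the program side nothing special is needed. A hypothetical program.pl (the option names below are only for illustration) just parses @ARGV as usual; since the module's defaults were unshifted to the front, anything typed on the real command line is seen later and wins:

    use strict;
    use warnings;
    use Getopt::Long;

    # defaults unshifted into @ARGV by the -M module are parsed first,
    # so a value given later on the real command line overrides them
    GetOptions(
        'verbosity=i' => \(my $verbosity = 1),
        'put=s'       => \(my $put),
    ) or die "bad options\n";

    print "verbosity is $verbosity\n";
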
    The print "@ARGV" unless caller; is inspired by the Modulino idea: when the program is invoked as program he build up the list and also print them. This way the module can contains configuration also for non Perl programs and receive them via xargs.
    For example if you have a long find configuration in ConfFind.pm you can invoke find this way:
    perl ConfigFind.pm | xargs find
    I hope you'll find it useful.

    L*

    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
SDR Scanner(aka Threaded wxPerl Example)
No replies — Read more | Post response
by jmlynesjr
on Oct 19, 2015 at 22:01

    SDR Scanner(aka Threaded wxPerl Example)

    Background: The July 2015 issue of Nuts and Volts magazine had an article on using an inexpensive DVB-T (Digital Video Broadcast - Terrestrial, the EU standard) USB dongle along with a 24MHz upconverter as the basis for an HF-and-higher Software Defined Receiver (SDR). The software used in the article was SDR#, an MS-Windows application. As I am an Ubuntu user, I looked for a similar Linux application and found gqrx. While waiting on parts to build the upconverter, I installed gqrx-sdr from the Ubuntu Software Center. It ran and pulled in various signals from 24MHz up, but was unstable. It turns out this was a pretty old version; after uninstalling and then installing via sudo add-apt-repository ppa:gqrx/snapshots, a current stable version was found. The output of gqrx can be routed through PulseAudio to other software (like fldigi) that can decode various digital modes.

    gqrx has a Remote Control feature that can accept commands (a subset of the Amateur Radio rigctl protocol) over a Telnet connection, and more digging turned up a Perl script (gqrx-scan) that uses this interface, through Net::Telnet, to implement a scanner feature. gqrx-scan has many bells and whistles that I wouldn't ever use, so I wrote a lite/crude version (lite.pl) that just scans from F1 to F2 repeatedly. For more flexibility (and practice) I decided to expand lite.pl into a wxPerl application (threadedgqrxLite.pl), mostly a bunch of text controls and sizers. A future enhancement might include one or more lists of frequencies of interest, i.e. local repeaters, satellites, etc.
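
    For a flavour of what that Telnet interface looks like from Perl, here is a rough sketch of a lite.pl-style sweep. It is not the author's code: the port, command strings, frequency range, step, and threshold are assumptions based on the rigctl-style protocol gqrx speaks, and will likely need adjusting.

    use strict;
    use warnings;
    use Net::Telnet;
    use Time::HiRes qw(sleep);

    # gqrx's remote control listens on localhost:7356 by default
    my $rig = Net::Telnet->new(Host => 'localhost', Port => 7356, Timeout => 5);

    my ($f1, $f2, $step) = (28_000_000, 29_700_000, 5_000);   # made-up 10m sweep
    while (1) {
        for (my $f = $f1; $f <= $f2; $f += $step) {
            $rig->print("F $f");          # set frequency (rigctl-style command)
            $rig->getline;                # discard the "RPRT 0" acknowledgement
            $rig->print("l STRENGTH");    # ask for the signal strength in dB
            chomp(my $level = $rig->getline);
            print "$f $level\n" if $level > -60;   # crude "interesting signal" threshold
            sleep 0.05;
        }
    }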

    For a hands-on introduction to SDR, take a look at websdr.com, a world-wide collection of SDR servers.

    What resulted is also an example of a wxPerl GUI operating in parallel with Perl Threads. YMMV. TMTOWTDI.

    James

    There's never enough time to do it right, but always enough time to do it over...

Radoteur
1 direct reply — Read more / Contribute
by QuillMeantTen
on Oct 17, 2015 at 12:07

    Greetings fellow monks,
    Today I bring you a nice Perl script that I wrote in class.
    It was in OOP class, and I needed a prototype before I started the implementation in Java, so I chose my favourite language.
    After a first implementation I discovered some intriguing aspects of the following algorithm. Because I wanted to understand its implications I needed more data, so I rewrote it again and again. This is quite an interesting algorithm indeed.

    The radoteur (as explained in the comments at the beginning of the code) is an algorithm developed by a French researcher. It takes a list of events and cycles through it, creating a new stream. A first event is selected, and when it reoccurs, the event immediately after that occurrence is selected next. Here I use a word list, but you could do it on images using each pixel's RGB value, or on anything that comes to mind.

    So here is the code, have fun. I tried it on /usr/share/dict/words as well as the rockyou.txt wordlist that you can find on Kali Linux, and the results are most interesting. Sometimes real words appear from the mess, even expressions. I hope you will have as much fun playing with it as I had writing it.

    Cheers!

    P.S. If you see any improvements or corrections to my code, please do tell me. I know it's not a SoPW post, yet I want to improve, so feel free to send some feedback my way; it won't go to /dev/null.
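
    For readers who just want the core rule, here is a tiny sketch of it as described above. It is an illustration only, not the author's script: the input, the cyclic scan, and the fixed stopping point are my own choices.

    use strict;
    use warnings;

    my @stream  = split //, 'abracadabra';   # any list of events works; letters here
    my $pos     = 0;
    my $current = $stream[$pos];             # a first event is selected
    my @out     = ($current);

    for (1 .. 30) {
        # scan forward (wrapping around) for the next occurrence of the current event
        my $i = ($pos + 1) % @stream;
        $i = ($i + 1) % @stream while $stream[$i] ne $current;
        # the event immediately after that occurrence becomes the new current event
        $pos     = ($i + 1) % @stream;
        $current = $stream[$pos];
        push @out, $current;
    }
    print @out, "\n";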

tree command
5 direct replies — Read more / Contribute
by Skeeve
on Oct 13, 2015 at 04:09

    I'm, every now and then, missing a "tree" command at places where I have to work.

    And every now and then I "reinvent the wheel".

    To put an end to this, I decided to present my current version here.

    I hope it's useful to others and maybe I'll get some constructive feedback?

    Update: Added AppleFritter's suggestion of hiding hidden files

    #!/bin/env perl

    use strict;
    use warnings;

    use Getopt::Long qw(:config no_ignore_case);
    use Pod::Usage;

    my (
        $showall,   # show hidden files
        $showfiles, # show also files
        $showlinks, # show also a symlink's target
    );

    help() unless GetOptions(
        a        => \$showall,
        l        => \$showlinks,
        f        => \$showfiles,
        'h|help' => \&help,
        'm|man'  => \&man,
    );

    sub help { pod2usage(-verbose=>1); }
    sub man  { pod2usage(-verbose=>2); }

    my $indent = ' ';
    my $indir  = '--';

    foreach my $path (@ARGV ? @ARGV : '.') {
        print $path,"\n";
        traverse($path, '');
    }

    sub traverse {
        my ($path, $depth) = @_;

        # Open the directory
        opendir my $dh, $path or die "Couldn't read $path: $!\n";

        # Get all directory content - leaving out files unless asked for
        my(@content) = grep {
            not /^\.\.?$/
            and ( $showfiles or not -f "$path/$_" )
            and ( $showall or /^[^.]/ )
        } readdir $dh;
        closedir $dh;

        # How many items are in the directory?
        my $count= scalar @content;

        # Prepare the standard indent
        my $i= $depth . '|' . $indent;

        # Print all the elements
        foreach my $sub (@content) {
            my $p= "$path/$sub";

            # Prepare the last indent
            $i= $depth . ' ' . $indent unless --$count;
            print $depth, ($count ? '|' : '\\'), $indir, $sub;

            # Is it a link?
            if ( -l $p ) {
                # Shall they be shown as such
                if ($showlinks) {
                    print " -> ", readlink $p;
                }
                print "\n";
                next;
            }
            print "\n";

            # Done unless it's a directory
            next unless -d $p;
            traverse($p, $i);
        }
        return;
    }

    =head1 NAME

    tree - A script to show a "graphical" representation of a directory structure

    =head1 SYNOPSIS

    tree [options] [path...]

    =head1 DESCRIPTION

    tree will show a "graphical" representation of a directory structure,
    including all files (when B<-f> specified) and link targets (when B<-l> specified).

    =head1 OPTIONS

    =over 4

    =item B<-f>

    Show also files.

    =item B<-l>

    Show also link targets.

    =item B<-h>

    =item B<--help>

    Show a short help page.

    =item B<-m>

    =item B<--man>

    Show the man-page.

    =back

    =head1 AUTHOR

    Skeeve of perlmonks.org (perlmonks DOT org DOT Skeeve at XoXy dot net)

    Including ideas of Apple Fritter, a fellow Perl monk

    =cut

    s$$([},&%#}/&/]+}%&{})*;#$&&s&&$^X.($'^"%]=\&(|?*{%
    +.+=%;.#_}\&"^"-+%*).}%:##%}={~=~:.")&e&&s""`$''`"e
Edit files automatically cross-platform, line-endings unknown
No replies — Read more | Post response
by stevieb
on Sep 30, 2015 at 16:56

    I had a need for a piece of my software to be platform-independent when reading and writing files with any type of line endings. I'm sure there are better or more efficient ways to do this, but I couldn't find one, and I really wanted to understand record separators better, so I wrote File::Edit::Portable.

    This module will read in a file on any platform, save the existing record separators (line endings), and modify them to the local platform's endings for use. It'll return either a file handle or an array of the file contents.

    You can optionally then push back an array of contents, and the module will rewrite the file (or optionally a new one), using the line endings that were found while reading (or optionally a user supplied one). This means it'll open Unix files on Windows and rewrite the file with Unix endings, and vice-versa on Windows and Mac (the three I've tested).

    use warnings;
    use strict;

    use File::Edit::Portable;

    my $rw = File::Edit::Portable->new;

    my $recsep;

    $recsep = $rw->recsep('unix.txt');
    print "unix file before write: $recsep\n";

    $recsep = $rw->recsep('win.txt');
    print "win file before write: $recsep\n";

    my $ufh = $rw->read(file => 'unix.txt');
    my @unix_contents = <$ufh>;
    close $ufh;

    $rw->write(contents => \@unix_contents);

    $recsep = $rw->recsep('unix.txt');
    print "unix file after rewrite: $recsep\n";

    my $wfh = $rw->read(file => 'win.txt');
    my @win_contents = <$wfh>;
    close $wfh;

    $rw->write(contents => \@win_contents);

    $recsep = $rw->recsep('win.txt');
    print "win file after rewrite: $recsep\n";

    Output:

    # unix

    $ ./find.pl
    unix file before write: \0a
    win file before write: \0d\0a
    unix file after rewrite: \0a
    win file after rewrite: \0d\0a

    # windows

    E:\test>perl find.pl
    unix file before write: \0a
    win file before write: \0d\0a
    unix file after rewrite: \0a
    win file after rewrite: \0d\0a

    We've also got a non-OO interface:

    use File::Edit::Portable qw(pread pwrite);

    my $fh = pread('file.txt');

    # or even

    my @contents = pread('file.txt');

    # then, later

    pwrite('file.txt', \@contents);

    I definitely know it's quite inefficient, but it's my first working prototype. There are a lot of things missing (unicode and open filters for instance). Criticisms or existing ways to do this are very welcome.

    -stevieb

Smoothsort
1 direct reply — Read more / Contribute
by QuillMeantTen
on Sep 20, 2015 at 07:42

    After a first try that contained some (at least for me) hard-to-track bugs in the function used to preserve the max-heap property of the trees, and after some much-needed refactoring and variable renaming to make the code less cryptic (both thanks to the hints given to me by Athanasius, praised may be his name), I can now give here a working Perl implementation of smoothsort.
    If anyone can give me hints to make it even more understandable and readable, I will update it.

    Update: @Athanasius: got rid of line 110 and its error, a leftover from my print-debug statements.

    @ww: Got rid of 2 to make it proper for CUFP, since it seems that saying that improvement suggestions are welcome makes this post unfit for CUFP.

    Link to SoPW post

Automatically inject trace code into Perl files
No replies — Read more | Post response
by stevieb
on Sep 13, 2015 at 17:01

    About 10 years ago, I wrote a project that contained a dozen modules and a couple of hundred subs. Following the flow was a nightmare until I added trace code to each sub. I then set out on a journey to develop a module that will automatically inject this tracing, and after 10 years of off-and-on Perl programming, these objectives are now a reality in my new Devel::Trace::Subs module.

    This module will install the appropriate use statement, along with the appropriate tracing call, into all subs (functions or methods) within a file, within all files in a directory (selectable by file extension), or within installed modules live by specifying a Module::Name. Of course, you can back this automation out simply with a different call.

    The typical SYNOPSIS will work, using the traditional use Devel::Trace::Subs qw(trace); and then adding the trace(); call to every single sub you want to support, but automation is what programming is for.

    We'll start with the most basic example, a single script file with multiple subs:

    use warnings;
    use strict;

    one();

    exit(0);

    sub one {
        my $str = 'hello, world!';
        two($str);
    }
    sub two {
        my $str = shift;
        $str =~ s/hello/goodbye/;
        three($str);
    }
    sub three {
        my $str = shift;
        print "$str\n";
    }

    Which does this when run...

    goodbye, world!

    Now we'll install tracing into it (easiest from the command line, but you can of course script it as well):

    perl -wMstrict -MDevel::Trace::Subs=install_trace -e 'install_trace(file => "trace.pl");'

    Let's check to see what happened to our original file:

    use Devel::Trace::Subs qw(trace trace_dump); # injected by Devel::Trace::Subs

    use warnings;
    use strict;

    one();

    exit(0);

    sub one {
        trace() if $ENV{DTS_ENABLE}; # injected by Devel::Trace::Subs

        my $str = 'hello, world!';
        two($str);
    }
    sub two {
        trace() if $ENV{DTS_ENABLE}; # injected by Devel::Trace::Subs

        my $str = shift;
        $str =~ s/hello/goodbye/;
        three($str);
    }
    sub three {
        trace() if $ENV{DTS_ENABLE}; # injected by Devel::Trace::Subs

        my $str = shift;
        print "$str\n";
    }

    We have to add a couple of lines of code to the calling script manually (in this case, the calling script is the same file that contains the subs we're tracing, so we add them there).

    $ENV{DTS_ENABLE} = 1; # add this line before the first sub call

    trace_dump();         # add this line after the last sub call

    NOTE: To disable all tracing ability globally, simply set the single environment variable to a false value (or comment it out, etc).

    Now let's see what the output is:

    goodbye, world!

    Code flow:

    1: main::one
    2: main::two
    3: main::three

    Stack trace:

    in: main::one
    sub: -
    file: ./trace.pl
    line: 8
    package: main

    in: main::two
    sub: main::one
    file: ./trace.pl
    line: 19
    package: main

    in: main::three
    sub: main::two
    file: ./trace.pl
    line: 27
    package: main

    To remove all traces of the auto install feature, simply:

    perl -wMstrict -MDevel::Trace::Subs=remove_trace -e 'remove_trace(file => "trace.pl");'

    ...and then manually remove the $ENV{DTS_ENABLE} = 1; and trace_dump(); lines from the calling script (again, in this case, it was all done in a single file).

    This was the most basic example. I have tested it on my projects that have numerous modules/packages, as well as live files by specifying a directory or Module::Name to the 'file' parameter in the install_trace() CLI call.

    install_trace() parameters (only file is mandatory):

    • file => 'filename' or dir, or Module::Name
    • extensions => [qw(pl pm)] which is the default for dirs

    trace_dump() parameters (all are optional):

    • want => 'string' where 'string' is either 'flow' or 'stack', which will dump only that portion. Default is both
    • type => 'html' dumps the output in HTML table format instead of plain text table format
    • file => 'file.ext' the dump output will be placed in this file, regardless of format

    TESTING WITH A LIVE MODULE:

    Ever wanted to see what a module you use frequently does internally? Let's take Data::Dump:

    sudo perl -MDevel::Trace::Subs=install_trace -e 'install_trace(file=>"Data::Dump");'

    perl -MData::Dump -MDevel::Trace::Subs=trace_dump -e '$ENV{DTS_ENABLE}=1; dd {a => 1}; trace_dump'

    { a => 1 }

    Code flow:

    1: Data::Dump::dd
    2: Data::Dump::dump
    3: Data::Dump::_dump
    4: Data::Dump::tied_str
    5: Data::Dump::_dump
    6: Data::Dump::format_list

    Stack trace:

    in: Data::Dump::dd
    sub: -
    file: -e
    line: 1
    package: main

    in: Data::Dump::dump
    sub: Data::Dump::dd
    file: /usr/lib/perl5/site_perl/5.22.0/Data/Dump.pm
    line: 84
    package: Data::Dump

    in: Data::Dump::_dump
    sub: Data::Dump::dump
    file: /usr/lib/perl5/site_perl/5.22.0/Data/Dump.pm
    line: 36
    package: Data::Dump

    in: Data::Dump::tied_str
    sub: Data::Dump::_dump
    file: /usr/lib/perl5/site_perl/5.22.0/Data/Dump.pm
    line: 292
    package: Data::Dump

    in: Data::Dump::_dump
    sub: Data::Dump::_dump
    file: /usr/lib/perl5/site_perl/5.22.0/Data/Dump.pm
    line: 331
    package: Data::Dump

    in: Data::Dump::format_list
    sub: Data::Dump::dump
    file: /usr/lib/perl5/site_perl/5.22.0/Data/Dump.pm
    line: 65
    package: Data::Dump

    sudo perl -MDevel::Trace::Subs=remove_trace -e 'remove_trace(file=>"Data::Dump");'

    CAVEATS:

    • does not yet catch subs whose opening brace is on a new line after the line the sub is defined on (fixed in v0.06)
    • this module places a Storable file inside the directory the base calling script is called from. I'm still trying to figure out a way to make this non-root and cross-platform for scripts that are in root-only writable dirs
    • although it does inject into anonymous subs, if the anon sub is a one-liner it will currently be overlooked (due to not having implemented PPI insertion here yet... next version)
    • I'm sure there are other subtle bugs, as this is pretty much a first version

    There are too many todos to list here, as this is the first incarnation. I'm hoping some others will find it interesting, do some real-world testing, and tell me how bad the code is, so I can fix those issues while I continue to try to better my coding practice. That said, my biggest two are encompassing more within my PPI regime and, related to that, fixing the insertions and deletions so they handle *all* subs under all declarative structures, including the removal of such (and of any newlines added).

    DEPENDENCIES:

    There are quite a few. The most important are Devel::Examine::Subs v1.43+, Template, HTML::Template, and dependencies on those modules: PPI, Data::Compare and a couple of other small ones.

    -stevieb

Recursively list all files on a network shared drive on windows
No replies — Read more | Post response
by ambrus
on Sep 11, 2015 at 11:03

    Use this script on a Windows system to recursively list all files on a network shared drive (or a local directory if you wish). Change the values of $outfname and @startpath before you run it.

    The files are listed with metadata. The first column contains the file size, or the total size of the files contained in a directory, followed by an indicator of the file type: "/" for a directory, "@" for a symlink, "~" for a symlink to a directory, "?" for an error in lstat, "!" for an error in opendir. The latter two markers could show up when files are deleted while you are producing the listing. The second column is the creation date, the third column is the modification date, and last come the filename and, for symlinks, the link target.

    #!perl
    use warnings;
    use 5.016;
    use Time::HiRes ();
    use Encode;
    use Win32::LongPath ();
    use Unicode::Collate ();

    our $outfname = q(C:\somedir\list.txt);
    our @startpath = (
        "\\\\HOSTNAME\\sharename",
    );

    open our$outf, ">:encoding(utf8)", $outfname or die;
    our $pro_count = 0;
    our $pro_time = Time::HiRes::time();
    our $pro_gran = 1251;
    our $coll = Unicode::Collate->new;

    sub visit {
        my($path, $addto_ref) = @_;
        my$stat = Win32::LongPath::lstatL($path);
        my$isdir = $stat && 0 != (0x10 & $$stat{attribs});
        my$islink = $stat && 0 != (0x400 & $$stat{attribs});
        my$typestr = !$stat ? "?" : $islink ? ($isdir ? "~" : "@") : $isdir ? "/" : " ";
        my$mdatestr = "?";
        my$cdatestr = "?";
        if ($stat) {
            my($ms,$mm,$mh,$md,$mb,$my) = gmtime($$stat{ctime});
            $mdatestr = sprintf "%04d-%02d-%02d", 1900+$my, 1+$mb, $md;
            my($cs,$cm,$ch,$cd,$cb,$cy) = gmtime($$stat{mtime});
            $cdatestr = sprintf "%04d-%02d-%02d", 1900+$cy, 1+$cb, $cd;
        }
        my$readlink = $islink ? Win32::LongPath::readlinkL($path) : undef();
        my$linkstr = defined($readlink) ? "\t>" . $readlink : "";
        my$size = $stat ? 0 + $$stat{size} : 0;
        if (!$islink && $isdir) {
            my$dh = Win32::LongPath->new;
            my$ok = $dh->opendirL($path);
            if ($ok) {
                $^E = 0;
                my@n = $dh->readdirL;
                if (18 != $^E) {
                    warn qq(warning: readdir "$path" $^E ), 0+$^E;
                }
                @n = $coll->sort(@n);
                $dh->closedirL;
                for my$n (@n) {
                    "." eq $n || ".." eq $n and next;
                    if (!length($n) || $n =~ y|/\\||) {
                        warn qq(wtf strange name in directory: "$path" "$n");
                        next;
                    }
                    my$p = $path . "\\" . $n;
                    my$a = \$size;
                    visit($p, $a);
                }
            } else {
                $typestr = "!";
            }
        }
        $$addto_ref += $size;
        printf $outf "%12d%1s %10s %10s %s%s\n",
            $size, $typestr, $mdatestr, $cdatestr, $path, $linkstr;
        $outf->flush;
        if (0 == ++$pro_count % $pro_gran) {
            my$n = Time::HiRes::time();
            my$d = $n - $pro_time;
            printf STDERR qq(progress %9d %.2f "%s"), $pro_count, $d, encode_utf8($path);
            Time::HiRes::sleep(0.9 + 2.2 * abs($d));
            printf STDERR qq(;\n);
            $pro_time = Time::HiRes::time();
        }
    }

    for my$p (@startpath) {
        warn qq(starting from "$p");
        my$total_sz = 0;
        visit($p, \$total_sz)
    }
    warn qq(lsrecursive all done.);

    __END__

    Update: changed the text to describe an ascended bug in the program where the ctime and mtime were swapped.

create_c_project with tap harness
1 direct reply — Read more / Contribute
by QuillMeantTen
on Sep 11, 2015 at 06:34

    Greetings fellow monks,
    Too sick to be able to crawl to my classes, I sat at my workbench and started hammering away at my latest obsession. After devouring Calcote's fine Autotools book, I finally got it to work as I wanted it to (the obsession, not the book).
    Alas, as many of you may know, trial and error is a bitch, especially if your mind is deprived of its usual clarity by the wicked veils of sickness.

    Pretty sure that by the end of my exhaustion-induced nap with my cat I would have completely forgotten how I did it (and, thanks to said veils, I have not versioned each trial correctly), I set out to write the following script:

    Given a path to your newly created C project folder and one to your c-tap-harness folder, it will create a new project with the set of files needed by Autotools.
    Fellow monks who dabble in the deep and dark arts of C programming, I give you these tools so you can use them or discard them as you see fit.

    As always, I am eager for ways to get better at perling, so if you spot errors or ways to improve the script, I am all electronic ears and shall follow up on your suggestions.

    Yours, still relatively coherent,

    And now, create_c_project.pl:
    Update: thanks for the good idea about filepath; I have updated the code and got rid of a silly bug related to the placement of AUTOMAKE_INIT and such things inside configure.ac.

    Yet another update: fixed some issues and made the resulting project folder easier to work with (mainly by sprinkling it with symlinks and moving object files around).

    And now the last update: I got everything working, so make dist really distributes everything important.

    And the last update, promised: this time the template makefile works for distclean-type commands.

C test suite management with perl
1 direct reply — Read more / Contribute
by QuillMeantTen
on Sep 09, 2015 at 12:46

    Greetings fellow monks,
    For my studies at the university, I recently endeavored with a friend to prepare the best infrastructure for the long-term projects we are to be given later this year.
    Amongst other things, we set up a continuous integration environment, an XMPP server for team communication, and that kind of thing.
    Since we are both big fans of test-driven development, and I fell in love with the TAP protocol when I started writing my own modules, I decided to give this library a shot. After some manipulation I got it to work and started writing scripts that would allow us to easily build and centralize test logs.

    The thing is, in previous projects we did that kind of thing in big, unwieldy Bash scripts. I decided to do it in Perl, but hit quite the roadblock: my friend does not know Perl and has no intention of learning it in the foreseeable future.

    I prepared the following solution:

    One makefile calls the build.pl script (with whatever parameters are needed), and a run_tests.pl script goes through all the files in the tests directory, runs all tests, and logs the results. If some tests fail, it creates two files: one with all the failed tests for the file under test, and another with all of that file's tests, to give some context.
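
    As a stripped-down illustration of the run_tests.pl role, something like the following would run every test executable found under a tests directory through TAP::Harness (this is not the author's script; the directory layout, the "executable file" test, and the options are assumptions):

    use strict;
    use warnings;
    use TAP::Harness;
    use File::Find;

    # collect every executable file under tests/ (assumed to emit TAP when run)
    my @tests;
    find(sub { push @tests, $File::Find::name if -f && -x }, 'tests');

    my $harness = TAP::Harness->new({
        verbosity => 0,
        # run each test file directly instead of through perl
        exec      => sub { my (undef, $test) = @_; [ $test ] },
    });

    my $aggregate = $harness->runtests(sort @tests);
    exit($aggregate->has_errors ? 1 : 0);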

    Here is build.pl:

    Here is run_tests.pl:

    And the main project makefile:

    The unchecked target is used to build the project without running the test scripts.
    It is important to take into account that inside the tests folder each subfolder contains its own makefile to build the test executable and then clean after tests have been run.

    As always, I post here hungry for ways to make the code better and more efficient, or to solve overlooked issues and design problems.
    Also I hope that someone might have a use for it ^^
    Kind Regards.

Doomsday algorithm
3 direct replies — Read more / Contribute
by QuillMeantTen
on Sep 06, 2015 at 10:06

    Thanks to this piece on Wired I learnt about Conway's Doomsday algorithm for finding the day of the week of any date.
    Trying to wrap my head around the algorithm, I decided to implement it as a learning exercise.
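
    In outline, the rule computes a reference weekday for the year (its "doomsday") from a century anchor, then counts the offset from an easy-to-remember doomsday date in the target month. A minimal sketch of that standard rule (my illustration only, not the author's implementation):

    use strict;
    use warnings;

    my @days = qw(Sunday Monday Tuesday Wednesday Thursday Friday Saturday);

    sub is_leap {
        my $y = shift;
        return ($y % 4 == 0 && $y % 100 != 0) || $y % 400 == 0;
    }

    sub day_of_week {
        my ($y, $m, $d) = @_;
        # Gregorian century anchor (Tuesday for the 2000s, Wednesday for the 1900s, ...)
        my $anchor   = (5 * (int($y / 100) % 4) + 2) % 7;
        my $yy       = $y % 100;
        my $doomsday = ($anchor + $yy + int($yy / 4)) % 7;
        # one memorable "doomsday" date per month: 4/4, 6/6, 8/8, 10/10, 12/12,
        # 5/9, 9/5, 7/11, 11/7, 3/14, and Jan/Feb shifted in leap years
        my @ref = (undef,
            is_leap($y) ? 4  : 3,    # Jan
            is_leap($y) ? 29 : 28,   # Feb
            14, 4, 9, 6, 11, 8, 5, 10, 7, 12);
        return $days[ ($doomsday + $d - $ref[$m]) % 7 ];
    }

    print day_of_week(2015, 9, 6), "\n";   # prints "Sunday", the date of this post
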
    Here is the code, enjoy :D

    Update, now with use strict!

Amazon S3 Etag calculator
No replies — Read more | Post response
by jellisii2
on Sep 04, 2015 at 08:02

    With a bit of research (here and here primarily), I sorted out a pure-Perl way to calculate the etag for an object on S3.

    My motivation was that I'm having to sync data from an S3 bucket for $work so a user can do work on the files that are put there by a third party. I wanted to ensure that the files I had were complete and hadn't changed.

    The only external requirement to actually use the calculator is to know the chunk size of the multi-part upload. I was able to guess this by running some rough math, but the person who uploaded the file (or you, if you're uploading the file) can provide this value to you.

    use strict;
    use warnings;

    use Digest::MD5 qw(md5 md5_hex);

    # needs a file name and a value for the multi-part chunk size in MB.
    print "etag for $ARGV[0] = " . calculate_etag(@ARGV) . "\n";

    sub calculate_etag {
        my ($file, $chunk_size) = @_;
        print "Calculating etag of $file...\n";
        my $string;
        my $count = 0;
        open(my $FILE, '<', $file) or die "Could not open $file: $!";
        binmode $FILE;
        while (read($FILE, my $data, 1048576 * $chunk_size)) {
            my $chunk_md5 = md5($data);
            $string .= $chunk_md5;
            $count++;
        }
        close($FILE);
        return(md5_hex($string) . "-$count");
    }
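
    Assuming the script is saved as, say, etag.pl (the name and the 8 MB chunk size below are only examples), you would run it against a downloaded object and compare the result with the ETag that S3 reports for that object:

    $ perl etag.pl backup.tar.gz 8
    Calculating etag of backup.tar.gz...
    etag for backup.tar.gz = <hex digest>-<number of chunks>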
