
Cool Uses for Perl


This section is the place to post your general code offerings -- everything from one-liners to full-blown frameworks and apps.

My first cpan module - App::ForKids::LogicalPuzzleGenerator
3 direct replies — Read more / Contribute
by pawel.biernacki
on Feb 23, 2018 at 14:06

    Hi! I would like to introduce my module, my first contribution to CPAN. It generates a logical puzzle. Get it from CPAN. You can use it as follows:

    use App::ForKids::LogicalPuzzleGenerator;

    my $x = App::ForKids::LogicalPuzzleGenerator->new(
        range => 3,
        amount_of_facts_per_session => 4,
    );

    print $$x{intro_story};
    print $$x{story};
    print $$x{solution_story};

    It makes heavy use of AI::Prolog. An example of such a puzzle is below:

    John,Patrick and James live here. Each has a different favourite fruit (pinapples,apples,pears). Each has a different profession (fisherman,blacksmith,witcher).

    - My name is John. The one who likes apples is not a blacksmith. Patrick is not a witcher. James does not like pinapples. James is not a fisherman.

    - My name is James. John does not like pears. Patrick does not like apples. I don't like apples. The one who likes apples is not a fisherman.

    John likes apples. John is a witcher. Patrick likes pinapples. Patrick is a fisherman. James likes pears. James is a blacksmith.

    Pawel Biernacki
Clean Up Empty Directories
4 direct replies — Read more / Contribute
by GotToBTru
on Feb 16, 2018 at 16:50
    The code somebody else wrote cleans out old files, but leaves the directories behind. This cleans up the directories.

    #!/usr/bin/perl
    use strict;
    use warnings;

    chomp(my @list = `du -kh /mnt/edi/si51/documents`);
    my $dltd = 0;

    foreach my $line (@list) {
        my ($size, $path) = split /\t/, $line;
        $size =~ s/\D//g;
        if ($size == 0) {
            rmdir $path && $dltd++;
        }
    }

    printf "%d directories deleted.\n", $dltd;

    UPDATE: There are several things that were in an earlier version of this script that didn't make the second cut, but only because I got lazy. My original got deleted somehow, and I had foolishly not kept a copy, so I wrote the above quickly.

    The directory structure is documents/4digityear/abbreviatedcardinalmonth/2digitday/hour/minute. At first I restricted deletions to directories above some number of days old, but rmdir updates the directory time information, meaning a directory that was now empty because all its empty constituent directories were gone looked like it was brand new. This made it useless to run consecutively. I came up with a calculation that used the directory tree to come up with the age, and that worked. I just didn't bother with it when I rewrote the script this time. Some of the alternate solutions don't have that limitation.
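    The path-derived age calculation described above can be sketched as follows. This is a minimal sketch under assumptions: the directory layout is documents/4digityear/abbreviatedmonth/2digitday/..., and the helper name path_age_days is made up for the demo. Because the age comes from the path components rather than the filesystem, rmdir updating the directory mtime no longer matters.

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::Local qw(timegm);

    # Map abbreviated month names to 0-based month numbers.
    my %mon;
    @mon{qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec)} = 0 .. 11;

    # Age in days of a .../YYYY/Mon/DD/... path, derived from the path
    # itself rather than the directory mtime (hypothetical layout).
    # Returns undef when the path carries no date components.
    sub path_age_days {
        my ($path, $now) = @_;
        $now = time unless defined $now;
        return undef unless $path =~ m{/(\d{4})/(\w{3})/(\d{2})(?:/|$)};
        my ($y, $m, $d) = ($1, $mon{$2}, $3);
        return undef unless defined $m;
        return ($now - timegm(0, 0, 0, $d, $m, $y)) / 86400;
    }

    # Feb 16 2018 seen from Feb 26 2018 is 10 days old.
    printf "%.0f days old\n",
        path_age_days('/mnt/edi/si51/documents/2018/Feb/16/10/30',
                      timegm(0, 0, 0, 26, 1, 2018));
    ```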

    But God demonstrates His own love toward us, in that while we were yet sinners, Christ died for us. Romans 5:8 (NASB)

MCE gather and relay demonstrations
No replies — Read more | Post response
by marioroy
on Feb 13, 2018 at 00:32

    Fellow Monks,

    I received a request from John Martel to process a large flat file and expand each record to many records based on splitting out items in field 4 delimited by semicolons. Each row in the output is given a unique ID starting with one while preserving output order.

    Thank you, John. This is a great use-case for MCE::Relay (2nd example).

    Input File -- Possibly larger than 500 GiB in size

    foo|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    bar|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    baz|field2|field3|item1;item2;item3;item4;itemN|field5|field6|field7
    ...

    Output File

    000000000000001|item1|foo|field2|field3|field5|field6|field7
    000000000000002|item2|foo|field2|field3|field5|field6|field7
    000000000000003|item3|foo|field2|field3|field5|field6|field7
    000000000000004|item4|foo|field2|field3|field5|field6|field7
    000000000000005|itemN|foo|field2|field3|field5|field6|field7
    000000000000006|item1|bar|field2|field3|field5|field6|field7
    000000000000007|item2|bar|field2|field3|field5|field6|field7
    000000000000008|item3|bar|field2|field3|field5|field6|field7
    000000000000009|item4|bar|field2|field3|field5|field6|field7
    000000000000010|itemN|bar|field2|field3|field5|field6|field7
    000000000000011|item1|baz|field2|field3|field5|field6|field7
    000000000000012|item2|baz|field2|field3|field5|field6|field7
    000000000000013|item3|baz|field2|field3|field5|field6|field7
    000000000000014|item4|baz|field2|field3|field5|field6|field7
    000000000000015|itemN|baz|field2|field3|field5|field6|field7
    ...

    Example One

    This example configures a custom function for preserving output order. Unfortunately, the sprintf call alone adds enough CPU time to the manager process that it can fall behind. The workers may then idle while waiting for the manager process to respond to the gather request.

    use strict;
    use warnings;

    use MCE::Loop;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    sub preserve_order {
        my ($fh) = @_;
        my ($order_id, $start_idx, $idx, %tmp) = (1, 1);

        return sub {
            my ($chunk_id, $aref) = @_;
            $tmp{ $chunk_id } = $aref;

            while ( my $aref = delete $tmp{ $order_id } ) {
                foreach my $line ( @{ $aref } ) {
                    $idx = sprintf "%015d", $start_idx++;
                    print $fh $idx, $line;
                }
                $order_id++;
            }
        }
    }

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 3,
        gather => preserve_order($fh_out)
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @buf;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g; chomp $line;

            my ($f1,$f2,$f3,$items,$f5,$f6,$f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @buf, "|$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        MCE->gather($chunk_id, \@buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Example Two

    To factor sprintf out of the manager process, another way is MCE::Relay for incrementing the ID value. Workers obtain the current ID value and increment/relay it for the next worker, ordered by chunk ID behind the scenes. Workers then call sprintf in parallel. This allows the manager process (out_iter_fh) to accommodate up to 32 workers without falling behind. It also depends on IO performance, of course.

    The MCE::Relay module is loaded automatically whenever the MCE init_relay option is specified.

    use strict;
    use warnings;

    use MCE::Loop;
    use MCE::Candy;

    my $infile  = shift or die "Usage: $0 infile\n";
    my $newfile = 'output.dat';

    open my $fh_out, '>', $newfile or die "open error $newfile: $!\n";

    MCE::Loop::init {
        chunk_size => 'auto', max_workers => 8,
        gather => MCE::Candy::out_iter_fh($fh_out),
        init_relay => 1
    };

    mce_loop_f {
        my ($mce, $chunk_ref, $chunk_id) = @_;
        my @lines;

        foreach my $line (@{ $chunk_ref }) {
            $line =~ s/\r//g; chomp $line;

            my ($f1,$f2,$f3,$items,$f5,$f6,$f7) = split /\|/, $line;
            my @items_array = split /;/, $items;

            foreach my $item (@items_array) {
                push @lines, "$item|$f1|$f2|$f3|$f5|$f6|$f7\n";
            }
        }

        my $idx = MCE::relay { $_ += scalar @lines };

        my $buf = '';
        foreach my $line ( @lines ) {
            $buf .= sprintf "%015d|%s", $idx++, $line;
        }

        MCE->gather($chunk_id, $buf);

    } $infile;

    MCE::Loop::finish();
    close $fh_out;

    Relay accounts for the worker handling the next chunk_id value. Therefore, do not call relay more than once inside the block. Doing so will cause IPC to stall.

    Regards, Mario

Easily back up all of your Github repositories and/or issues
No replies — Read more | Post response
by stevieb
on Feb 11, 2018 at 16:25

    It's been in the works at the lower-end of my priority list, but after having a bit of a bug-closing weekend, thought I'd tackle getting out an initial release of Github::Backup.

    The cloud is a great thing, until the sun evaporates it one way or another. Github, although fantastically reliable, is prone to issues just like any other site on the Internet. I'd go as far as to say that even they could be prone to data loss in very rare circumstances.

    This distribution, which provides a command-line binary, allows you to quickly and easily back up your repositories and issues to your local machine. The repositories are cloned so all data is retrieved as-is as legitimate Git repos, and the issues are fetched and stored as JSON data. Useful if there was ever a catastrophic issue at Github, or simply for offline perusal of your information.

    At a basic level, you need to send in your Github username, API token (see this), a directory to stash the data retrieved, and a flag to signify you want to back up either your repos, issues or both.

    github_backup \
        -u stevieb9 \
        -t 003e12e0780025889f8da286d89d144323c20c1ff7 \
        -d /home/steve/github_backup \
        -r \
        -i

    That'll back up both repos and issues. The structure of the backup directory is as follows:

    backup_dir/
        - issues/
            - repo1/
                - issue_id_x
                - issue_id_y
            - repo2/
                - issue_id_a
        - repo1/
            - repository data
        - repo2/
            - repository data

    Now, most don't like supplying keys/tokens/passwords on the command-line or within a script, so you can stash your Github API token into the GITHUB_TOKEN environment variable, and we'll fetch it from there instead:

    github_backup -u stevieb9 -d /home/steve/github_backup -r -i

    Full usage for the binary:

    Usage: github_backup -u username -t github_api_token -d /backup/directory -r -i

    Options:

        -u | --user     Your Github username
        -t | --token    Your Github API token
        -d | --dir      The backup directory
        -p | --proxy    Optional proxy (
        -r | --repos    Back up all of your repositories
        -i | --issues   Back up all of your issues
        -h | --help     Display this help page

    The API is very straightforward as well:

    use warnings;
    use strict;

    use Github::Backup;

    # token stashed in GITHUB_TOKEN env var
    my $gh = Github::Backup->new(
        api_user => 'stevieb9',
        dir      => '/home/steve/github_backup'
    );

    # back up all repos
    $gh->repos;

    # back up all issues
    $gh->issues;

    This is one distribution that I've released prior to being happy with my unit test regimen, so that's on the imminent to-do list. There are tests, but as always, there can never be enough. In this case, I, myself am not even happy, so if you run into any issues, please open a ticket, or reply back here.

    Going forward, I plan on adding functionality to independently back up *all* Github data for a user, not just repos and issues. I also plan to test restore operations, but that's not anything I'm considering short-term.

    Have fun!


    Disclaimer: Also posted on my blog.

Shell (bash/zsh) completion for dzil
1 direct reply — Read more / Contribute
by tinita
on Feb 09, 2018 at 13:37
    I created shell completion scripts for dzil. The completion that ships with Dist::Zilla only completes subcommands, and is only for bash, as far as I can see.
    My scripts also complete options, and show the description of subcommands and options.
    (If dzil commands change, I have to update this, too, of course.)
    I created this with

    # bash
    $ git clone
    $ cd shell-completions
    $ source bash/dzil.bash
    $ dzil <TAB>
    add         -- add modules to an existing dist
    authordeps  -- list your distributions author dependencies
    build       -- build your dist
    clean       -- clean up after build, test, or install
    commands    -- list the applications commands
    help        -- Show command help
    install     -- install your dist
    listdeps    -- print your distributions prerequisites
    new         -- mint a new dist
    nop         -- do nothing: initialize dzil, then exit
    release     -- release your dist
    run         -- run stuff in a dir where your dist is built
    setup       -- set up a basic global config file
    smoke       -- smoke your dist
    test        -- test your dist
    $ dzil test --<TAB>
    --all             -- enables the RELEASE_TESTING, AUTOMATED_TESTING, EXTENDED_TESTING and AUTHOR_TESTING env variables
    --author          -- enables the AUTHOR_TESTING env variable
    --automated       -- enables the AUTOMATED_TESTING env variable (default behavior)
    --extended        -- enables the EXTENDED_TESTING env variable
    --help            -- Show command help
    --jobs            -- number of parallel test jobs to run
    --keep-build-dir  -- keep the build directory even after a success
    --keep            -- keep the build directory even after a success
    --lib-inc         -- additional @INC dirs
    --release         -- enables the RELEASE_TESTING env variable
    --test-verbose    -- enables verbose testing (TEST_VERBOSE env variable on Makefile.PL, --verbose on Build.PL)
    --verbose         -- log additional output
    --verbose-plugin  -- log additional output from some plugins only

    # zsh
    # put zsh/_dzil into a directory which is read by
    # zsh completions, or add this directory to your .zshrc:
    # fpath=("$HOME/path/to/shell-completions/zsh" $fpath)
    # log in again
Short GitHub Markdown emojis
No replies — Read more | Post response
by reisinge
on Jan 24, 2018 at 07:57

    I wanted to use an emoji in a README file on GitHub. Since I plan to use it often I wanted to pick a short one:

    curl -sL | perl -nE '/(:[^:]+:)/ && length $1 <= 5 && say $1'
    Leave no stone unturned. -- Euripides
Terms shortener :-)
1 direct reply — Read more / Contribute
by reisinge
on Jan 09, 2018 at 08:22
    echo 'authentication' | perl -F'' -lape '$_ = $F[0] . (@F-2) . $F[-1]'
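    For use in a script rather than a one-liner, the same trick can be wrapped in a small function (numeronym is a hypothetical name; the logic mirrors the one-liner: first character, count of middle characters, last character):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Shorten a term numeronym-style: first char, middle-char count, last char.
    # (Hypothetical helper; the one-liner above does the same via autosplit.)
    sub numeronym {
        my ($word) = @_;
        return $word if length($word) < 3;   # nothing to abbreviate
        return substr($word, 0, 1) . (length($word) - 2) . substr($word, -1);
    }

    print numeronym('authentication'), "\n";         # a12n
    print numeronym('internationalization'), "\n";   # i18n
    ```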
    Modified: added -a as advised by choroba
    And like that ... he's gone. -- Verbal
X12Splitter: A Tool For Splitting X12-Formatted .dat Files
1 direct reply — Read more / Contribute
by bpoag
on Dec 30, 2017 at 21:30
    Now, if you're like most people, most of your day is spent wandering around aimlessly and asking yourself, "Man, I wish I had a Perl script that would take like a huuuuge X12-formatted file, and split it up into input files, each one no greater than 1500KB, or 2500 claims, whichever comes first. Wow, if I had that... Man, I'd even be willing to edit the hardcoded output path in that script to suit where I wanted those chunks to go!" Well, look no further:
    #!/usr/bin/perl
    ##
    ## X12Splitter written 043013 by Bowie J. Poag
    ##
    ## X12Splitter takes an X12-formatted .dat file, and splits it
    ## up into inputFiles no greater than 1500KB or 2500 claims,
    ## whichever comes first.
    ##
    ## Usage:
    ##
    ##    x12splitter <filename>
    ##
    ## Example:
    ##
    ##    x12splitter foo.dat
    ##

    $| = 1;
    $numRecords = 0;
    $numBytes   = 0;
    $fileName   = $ARGV[0];

    errorCheckAndPrep();
    dumpChunks();

    sub errorCheckAndPrep
    {
        print "\n\nX12Splitter: Checking $fileName for any structural problems..";

        @inputFile     = `cat $fileName`;
        @temp          = `ls -l $fileName`;
        @fileDetails   = split(" ", $temp[0]);
        $fileSize      = $fileDetails[4] + 0;
        $numElements   = scalar(@inputFile);
        $numTotalBytes = length($inputFile[0]);

        if ($numElements > 1)
        {
            print "X12Splitter: Input file is malformed. Exiting..\n";
            exit();
        }
        else
        {
            print "..";
        }

        if ($fileSize != $numTotalBytes)
        {
            print "X12Splitter: Payload size and stated file size mismatch. Exiting.\n";
            exit();
        }
        else
        {
            print "..";
        }

        if ($inputFile[0] =~ /^ISA/)
        {
            print "Done.\n";
        }

        print "X12Splitter: Check complete. Parsing file..\n";

        @payload       = split("~ST", $inputFile[0]);
        $envelopeOpen  = $payload[0];
        $envelopeClose = $payload[-1];
        $envelopeClose =~ /~GE/;
        $envelopeClose = "~GE$'";
        $payload[-1]   = $`;

        if ($envelopeOpen =~ /^ISA/ && $envelopeClose =~ /~GE/)
        {
            print "X12Splitter: Envelope open and close chunks found successfully.\n";
        }
        else
        {
            print "X12Splitter: Unexpected problem with envelope open. Opening ISA header or ~GE close not found.\n";
            exit();
        }

        shift(@payload);   ## Don't bother processing the envelope..

        foreach $item (@payload)
        {
            $recordCount++;
            $openRecordText  = substr($item, 0, 15);
            $closeRecordText = substr($item, length($item) - 40, 40);
            printf("\rX12Splitter: Record %6d: [%15s.....%-40s] \r",
                   $recordCount, $openRecordText, $closeRecordText);
        }

        print "\nX12Splitter: $recordCount total records found. Splitting..\n";
    }

    sub dumpChunks
    {
        $chunkPayload          = "";
        $chunkNum              = 0;
        $numBytesInThisChunk   = 0;
        $numRecordsInThisChunk = 0;

        foreach $item (@payload)
        {
            $numBytesInThisChunk = length($chunkPayload);
            $numRecordsInThisChunk++;
            $chunkPayload .= "~ST$item";

            if ($numRecordsInThisChunk > 2000 || $numBytesInThisChunk > 1000000)
            {
                $chunkPayload = "$envelopeOpen" . "$chunkPayload" . "$envelopeClose";
                open($fh, '>', "/demo/fin/healthport/$fileName.part.$chunkNum");
                print $fh "$chunkPayload";
                close($fh);
                print "X12Splitter: $numRecordsInThisChunk records saved to /demo/fin/healthport/$fileName.part.$chunkNum\n";
                $numBytesInThisChunk   = 0;
                $numRecordsInThisChunk = 0;
                $chunkNum++;
                $chunkPayload = "";
            }
        }

        ## Clean up the last of it..
        $chunkPayload = "$envelopeOpen" . "$chunkPayload" . "$envelopeClose";
        open($fh, '>', "/demo/fin/healthport/$fileName.part.$chunkNum");
        print $fh "$chunkPayload";
        close($fh);
        print "X12Splitter: $numRecordsInThisChunk records saved to /demo/fin/healthport/$fileName.part.$chunkNum\n";
    }

    print "\n\n\n";
SDBM databases: Alternate Keys with Duplicates
1 direct reply — Read more / Contribute
by erichansen1836
on Oct 22, 2017 at 14:39

    Perl SDBM databases of key/value pairs (tied to program hash tables) can house multiple-format KEYS, which is convenient for persistent random-access indexing of flat-file database records. (Note: the VALUE in the KEY/VALUE pair stores the byte offset of the flat-file record indexed.)

    For example, if you have a flat-file database of millions of fixed-length records, random-access indexed by Social Security Number (UNIQUE PRIMARY KEY), you may also wish to have an ALTERNATE KEY WITH DUPLICATES, in case the Social Security Number is not known for look-up. The code snippet below (incomplete; used just to illustrate a methodology) shows how this indexing may be set up. Once the indexing is in place, you can use a FOR loop iterated from 1 to $NUM_RECS to random-access retrieve all the flat-file database records matching any arbitrary compound KEY composed of info contained within the fields of the records. Note: the FILE POINTER is set to a record's byte offset before performing READ/WRITE operations.

    This is ISAM (Indexed Sequential Access Method), NoSQL, embedded database technology. The indexing stays persistent, so look-up is immediately available every time you launch your database user-interface (or batch) application program. SDBM is in the public domain, so you can distribute your FlatFile/SDBM database files and Perl application code FREE of CHARGE to as many companies and end-users as you like. For more discussion and Perl code examples at PerlMonks, see the "JOINT DATABASE TECHNOLOGY" thread.

    TO BE CLEAR... You can use ANY programming language you like (e.g. Perl, Python, C++, etc.) with SDBM support, ANY batch or GUI DB user-interface you like (e.g. portable Perl Tk, Perl Win32-GUI by Aldo Calpini), and ANY file handling syntax you like (Win32-API, portable sysopen/sysseek/sysread/syswrite/close, etc.).

    #-- YYYYMMDD
    #-- Key example: BirthDate|LastNameFirst4Chars|FirstNameInitial|StateCode
    #-- "19591219|Will|K|TX"
    #-- $KEY without a Seq Nbr is used to increment the number of records
    #-- saved to the database having a particular ALT KEY w/DUPS -
    #-- in this example: "19591219|Will|K|TX"

    $KEY = $BirthDate . "|" . $LastNameFirst4Chars . "|" . $FirstNameInitial . "|" . $StateCode;
    $Hash{$KEY} = 0;

    #-- Now index the first record encountered in the Flat File database
    #-- with this particular ALT KEY w/DUPS
    $num_recs = $Hash{$KEY};
    $num_recs++;                        #-- i.e. one(1)
    $Hash{$KEY} = $num_recs;
    $newKEY = $KEY . "|" . $num_recs;   #-- produces: "19591219|Will|K|TX|1"
    $Hash{$newKEY} =   #-- The VALUE would be set to the byte offset
                       #-- of the Flat File record just indexed

    #-- Now index the second record encountered in the Flat File database
    #-- with this particular ALT KEY w/DUPS
    $num_recs = $Hash{$KEY};
    $num_recs++;                        #-- i.e. two(2)
    $Hash{$KEY} = $num_recs;
    $newKEY = $KEY . "|" . $num_recs;   #-- produces: "19591219|Will|K|TX|2"
    $Hash{$newKEY} =   #-- The VALUE would be set to the byte offset
                       #-- of the Flat File record just indexed

    #-- and so on...
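    A runnable, self-contained sketch of the same scheme using the core SDBM_File module; the file names, keys and the index_record helper are invented for the demo, and the stored VALUEs stand in for flat-file byte offsets:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use SDBM_File;
    use Fcntl;
    use File::Temp qw(tempdir);

    my $dir = tempdir(CLEANUP => 1);
    tie my %Hash, 'SDBM_File', "$dir/alt_index", O_RDWR|O_CREAT, 0644
        or die "tie failed: $!";

    # Record one ALT KEY w/DUPS occurrence: bump the per-key counter and
    # store the record's byte offset under "KEY|SeqNbr".
    sub index_record {
        my ($key, $offset) = @_;
        my $n = ++$Hash{$key};          # how many records share this key
        $Hash{"$key|$n"} = $offset;     # e.g. "19591219|Will|K|TX|1" => offset
        return $n;
    }

    index_record("19591219|Will|K|TX", 0);      # first duplicate
    index_record("19591219|Will|K|TX", 4096);   # second duplicate

    # Retrieve every record sharing the compound key:
    my $key = "19591219|Will|K|TX";
    for my $i (1 .. $Hash{$key}) {
        my $off = $Hash{"$key|$i"};
        print "offset of duplicate $i: $off\n";
    }
    ```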
Determining Gaps and Overlap in Timestamped Data
No replies — Read more | Post response
by haukex
on Oct 20, 2017 at 11:16

    I've recently been working with large sets of timestamped measurement data from different devices, often recorded at different times on different days and spread across multiple files. Since I'm not always involved in the recording of the data, I need to look at when the devices were turned on and off, any gaps in the data, etc., in particular for which spans of time all devices were measuring at the same time, since that's the data that then needs to be analyzed. The timestamps are jittery, and data doesn't always come in order (or, equivalently, I'd like to not have to sort everything by timestamp). Set::IntSpan's union and intersect operations make this pretty easy!
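    A small sketch of what that can look like (device names and epoch-second timestamps are invented for the demo; Set::IntSpan is the only non-core dependency): build one set per device, then union for coverage and intersect for the time when every device was measuring.

    ```perl
    #!/usr/bin/perl
    # Sketch of finding gaps and overlap with Set::IntSpan (CPAN).
    use strict;
    use warnings;
    use Set::IntSpan;

    # Seconds during which each device was recording; input order doesn't
    # matter, since Set::IntSpan normalizes everything into sorted spans.
    my $dev_a = Set::IntSpan->new([ 200 .. 260, 100 .. 150 ]);  # gap 151-199
    my $dev_b = Set::IntSpan->new([ 120 .. 240 ]);

    # union: all time covered by either device ("was anything on?")
    my $any = $dev_a->union($dev_b);

    # intersect: time covered by every device -- the data worth analyzing
    my $all = $dev_a->intersect($dev_b);

    print "dev_a:   ", $dev_a->run_list, "\n";  # 100-150,200-260
    print "either:  ", $any->run_list,   "\n";  # 100-260
    print "overlap: ", $all->run_list,   "\n";  # 120-150,200-240
    ```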

Finding matching filenames in a directory tree [mz2255]
No replies — Read more | Post response
by 1nickt
on Oct 19, 2017 at 15:01

    Earlier today a new monk (mz2255) attempted to post a question on SoPW about recursively searching for files in a directory tree. He was having issues with excluding . and .. and also with rel2abs and nested readdir calls and what have you. He was unable to get the SoPW to post and ended up posting on his scratch pad, so here is a reply for mz2255, and a demonstration of what I would call the modern way to do the job, using Path::Tiny.

    Note that the regexp is minimally modified from the OP and likely needs improvement before it can be used reliably for the OP's desired outcome. Left here for demo purposes.

    use strict;
    use warnings;
    use feature qw/ say /;

    use Data::Dumper; $Data::Dumper::Sortkeys = 1;
    use Path::Tiny;

    my $root_dir = Path::Tiny->tempdir;
    _populate_for_demo( $root_dir );

    my $re = qr/ (?:\w|\d)+ _ \w+ _ .+ _R(1|2)_ .+ /x;

    my %results;
    $root_dir->visit(
        sub {
            $_->is_file and push @{ $results{$1} }, "$_" if /$re/
        },
        { recurse => 1 },
    );

    say Dumper \%results;
    exit;

    sub _populate_for_demo {
        my $temp_dir = shift;
        path("$temp_dir/$_/aa_bb_cc_R1_dd.tmp")->touchpath for 'foo','bar';
        path("$temp_dir/$_/aa_bb_cc_R2_dd.tmp")->touchpath for 'baz','qux';
        return $temp_dir;
    }
    __END__
    $ perl
    $VAR1 = {
              '1' => [
                       '/tmp/0JbuMoAJix/bar/aa_bb_cc_R1_dd.tmp',
                       '/tmp/0JbuMoAJix/foo/aa_bb_cc_R1_dd.tmp'
                     ],
              '2' => [
                       '/tmp/0JbuMoAJix/baz/aa_bb_cc_R2_dd.tmp',
                       '/tmp/0JbuMoAJix/qux/aa_bb_cc_R2_dd.tmp'
                     ]
            };

    Update: moved creation of the temp dir to main for clarity

    The way forward always starts with a minimal test.
HollyGame gamekit (almost @ CPAN)
2 direct replies — Read more / Contribute
by holyghost
on Oct 15, 2017 at 04:22
    This is the first implementation of HollyGame, a framework that sits underneath e.g. SDL 1.2 in my code, or is buildable with SDL 1.2 or cairo 1.2 or 2.x. Once I have debugged it, I will try to host it on CPAN.

    Now follows an implementation of the game Wycadia based on the above code :

Unidatab-CGI reloaded
1 direct reply — Read more / Contribute
by emilbarton
on Oct 13, 2017 at 04:07
Indexed Flat File databases (for ISAM, NoSQL, Perl Embedded databases)
1 direct reply — Read more / Contribute
by erichansen1836
on Oct 08, 2017 at 11:13

    TOPIC: FAST!! Random-access indexed, relational flat-file databases, indexed by external Perl SDBM databases of key/value pairs tied to program "in memory" hash tables. The KEY in the key/value pair is one or more fields and/or partial fields (contained within the flat-file records) concatenated together, separated by a delimiter such as a pipe "|", letting you arbitrarily seek to a single record, or to a sorted/related group of records, within your database.

    Since it has been over two years since I first posted about this TOPIC, I wanted to alert the Perl community to the original thread, where you can now find Perl source code examples of how to implement Joint Database Technology/Methodology; in particular, the King James Bible Navigator software DEMO I posted, which used FlatFile/SDBM for its database. I made this a native Windows GUI application (TreeView/RichEdit combo interface) to demonstrate how to show your end-users a summary of the information contained within a database, and let them drill down to a small amount of specific information (e.g. verses within a single book/chapter) for actual viewing (and retrieval from the database). The TreeView double-click event was originally written to random-access the first verse within a chapter, then sequentially access the remaining verses, performing a READ for each verse. I posted a separate, modified TreeView double-click event for you to insert into the application, which reads an entire chapter in one (1) giant READ, breaking out the individual verses (into an array) using the UNPACK statement. -- Eric
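    The seek/READ/unpack access pattern described above can be sketched as follows. This is a minimal sketch under assumptions: the record layout, field widths, and the read_record helper are all hypothetical; a real database would use its own fixed-length format.

    ```perl
    #!/usr/bin/perl
    # Random access to fixed-length records: seek to N * record_length,
    # read one record, unpack its fields.
    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    my $REC_LEN = 32;             # fixed record length, incl. padding
    my $LAYOUT  = 'A10 A20 A2';   # e.g. reference, text, newline padding

    # Build a tiny demo file of fixed-length records.
    my ($fh, $file) = tempfile(UNLINK => 1);
    binmode $fh;
    print {$fh} pack($LAYOUT, "Gen:1:$_", "verse text $_", "\n") for 1 .. 5;

    # Random-access READ of record N (0-based), ISAM-style.
    sub read_record {
        my ($fh, $n) = @_;
        seek $fh, $n * $REC_LEN, 0 or die "seek: $!";
        read($fh, my $buf, $REC_LEN) == $REC_LEN or die "short read";
        return unpack $LAYOUT, $buf;   # 'A' strips the trailing padding
    }

    my ($ref, $text) = read_record($fh, 2);
    print "$ref => $text\n";   # Gen:1:3 => verse text 3
    ```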

    Joint Database Technology:

STFL Terminal UI - Concurrency Demonstrations
2 direct replies — Read more / Contribute
by marioroy
on Oct 07, 2017 at 20:16

    Hello brothers and sisters of the monastery,

    I came across a dated article by Philip Durbin (from 10/2011). I thought I'd give MCE::Hobo a try and see how it goes.

    The STFL library, a curses-based widget set for text terminals, compiles seamlessly on the Linux platform. Ncurses development libraries and swig are needed on CentOS 7.3. I've not tested on other Unix platforms.

    sudo yum install ncurses-devel swig

    tar xzf /path/to/stfl-0.24.tar.gz
    cd stfl-0.24

    # Modify Makefile and comment out the SWIG lines
    #ifeq ($(FOUND_SWIG)$(FOUND_PERL5),11)
    #include perl5/Makefile.snippet
    #endif
    #ifeq ($(FOUND_SWIG)$(FOUND_PYTHON),11)
    #include python/Makefile.snippet
    #endif
    #ifeq ($(FOUND_SWIG)$(FOUND_RUBY),11)
    #include ruby/Makefile.snippet
    #endif

    sudo make install

    # Finally, build the Perl module
    cd perl5
    swig -Wall -perl stfl.i
    perl Makefile.PL
    sudo make install


    From the STFL documentation, a special language is used to describe the STFL GUI.

    **
    ** example.stfl: STFL layout for and
    **
    vbox
      hbox
        .expand:0
        @style_normal:bg=yellow,fg=black
        label
          text:'Little STFL Program'
        label["label 1"]
        text["text 1"]:"10000"
        label["label 2"]
        text["text 2"]:"20000"
        label["label 3"]
        text["text 3"]:"30000"
      table
        .expand:0
        @input#style_focus:bg=blue,fg=white,attr=bold
        @input#style_normal:bg=blue,fg=black
        @input#.border:rtb
        @L#style_normal:fg=red
        @L#.expand:0
        @L#.border:ltb
        @L#.spacer:r
        label#L
          text:'Field A:'
        input
          .colspan:3
          text[value_a]:'foo'
        tablebr
        label#L
          text:'Field B:'
        input
          text[value_b]:'bar'
        label#L
          text:'Field C:'
        input
          text[value_c]:'baz'
      label
        .expand:v
        .tie:bl
        text[helpmsg]:''

    Each worker increments a shared counter. What is cool about STFL is being able to enter text while the counters increment simultaneously in the terminal. Pressing F2 signals the workers to exit. F1 spawns new Hobo workers. Pressing ESC or Ctrl-C (handled by MCE::Signal) exits the application.

    #!/usr/bin/env perl

    use strict;
    use warnings;

    use stfl;
    use MCE::Hobo 1.831;
    use MCE::Shared;
    use Time::HiRes qw/sleep/;

    my $count1 = MCE::Shared->scalar(10000);
    my $count2 = MCE::Shared->scalar(20000);
    my $count3 = MCE::Shared->scalar(30000);

    my $layout;
    {
        open my $fh, "<", "example.stfl"
            or die "open error 'example.stfl': $!";
        local $/; $layout = <$fh>;
    }

    my $f = stfl::create($layout);
    my $s = 0;

    # MCE::Hobo 1.832 and later releases will set posix_exit
    # automatically when present, $INC{''}.
    MCE::Hobo->init( posix_exit => 1 );

    sub bg_start {
        unless ($s) {
            mce_async { sleep(0.9), $count1->incr() while 1 };
            mce_async { sleep(0.6), $count2->incr() while 1 };
            mce_async { sleep(0.3), $count3->incr() while 1 };
            $s = 1;
        }
    }

    sub bg_stop {
        if ($s) {
            $_->exit()->join() for MCE::Hobo->list();
            $s = 0;
        }
    }

    $f->set('helpmsg', '[ ESC = exit | F1 = start | F2 = stop ]');

    bg_start();

    while (1) {
        my $event = $f->run(50);

        if ($s) {
            # must stringify in case numeric value
            $f->set('text 1', ''.$count1->get());
            $f->set('text 2', ''.$count2->get());
            $f->set('text 3', ''.$count3->get());
        }

        next unless (defined $event);

        bg_start() if $event eq 'F1';
        bg_stop()  if $event eq 'F2';

        last if $event eq 'ESC';
    }

    bg_stop();

    Here, workers enqueue the form ID and value into a queue. The main process makes one trip to the shared-manager, maximum 3 replies.

    #!/usr/bin/env perl

    use strict;
    use warnings;

    use stfl;
    use MCE::Hobo 1.831;
    use MCE::Shared;
    use Time::HiRes qw/sleep/;

    my $q = MCE::Shared->queue();

    my $layout;
    {
        open my $fh, "<", "example.stfl"
            or die "open error 'example.stfl': $!";
        local $/; $layout = <$fh>;
    }

    my $f = stfl::create($layout);
    my $s = 0;

    # MCE::Hobo 1.832 and later releases will set posix_exit
    # automatically when present, $INC{''}.
    MCE::Hobo->init( posix_exit => 1 );

    mce_async {
        my $c = 10000;
        sleep(0.9), $q->enqueue([ 'text 1', ++$c ]) while 1;
    };
    mce_async {
        my $c = 20000;
        sleep(0.6), $q->enqueue([ 'text 2', ++$c ]) while 1;
    };
    mce_async {
        my $c = 30000;
        sleep(0.3), $q->enqueue([ 'text 3', ++$c ]) while 1;
    };

    $f->set('helpmsg', '[ ESC = exit ]');

    while (1) {
        my $event = $f->run(50);

        foreach my $ret ($q->dequeue_nb(3)) {
            # must stringify in case numeric value
            $f->set($ret->[0], ''.$ret->[1]);
        }

        next unless (defined $event);

        last if $event eq "ESC";
    }

    $_->exit()->join() for MCE::Hobo->list();

    Entering text into an input box while other areas of the form update automatically is quite nice. Furthermore, a worker may run an event loop without impacting the main process. There are a lot of possibilities.

    Regards, Mario
