 
PerlMonks  

Cool Uses for Perl


This section is the place to post your general code offerings.

CUFP's
Case Exhaustion Tree
3 direct replies — Read more / Contribute
by Xiong
on Feb 02, 2014 at 15:33

    For some time now I've been obsessed with clean decision trees. No solution is appropriate for all needs and, as usual, tradeoffs are in play. Today I hacked out one particular way of simplifying these trees.

    # Four-outcome tree
    my $cf = !!$code;
    my $af = !!( $args and ref $args );
    if    (  $cf &&  $af ) { @ARGV = @$args;                             }
    elsif (  $cf && !$af ) { @args = ();                                 }
    elsif ( !$cf &&  $af ) { @args = @$args; $code = $unit . q{(@args)}; }
    elsif ( !$cf && !$af ) { @args = ();     $code = $unit . q{()};      }
    else                   { die 'Fifth element!' };

    Of course tim toady; and I don't doubt others will rush to display their favorite decision trees. This one may work for me sometimes.
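
    One such alternative, for comparison (a sketch of mine, not Xiong's code, assuming the same $code, $args, $unit, and @args as above): normalize the two flags to a two-character key and dispatch through a hash of closures, so exhaustiveness is visible at a glance.

    # Dispatch-table variant: same four outcomes, keyed on the flags.
    my $key = ( $cf ? 1 : 0 ) . ( $af ? 1 : 0 );
    my %case = (
        '11' => sub { @ARGV = @$args },
        '10' => sub { @args = () },
        '01' => sub { @args = @$args; $code = $unit . q{(@args)} },
        '00' => sub { @args = ();     $code = $unit . q{()} },
    );
    ( $case{$key} || die 'Fifth element!' )->();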

    Death only closes a Man's Reputation, and determines it as good or bad. —Joseph Addison
Static analyzer for brainfuck code.
No replies — Read more | Post response
by ohcamacj
on Jan 13, 2014 at 00:23

    I recently spent a while staring at some brainfuck code (in particular, the rot13 decoder on the oldest http://golf.shinh.org brainfuck challenge), trying to understand how it worked.

    I didn't succeed, but ended up writing a static analyzer to eliminate simple loops like [<+>-] which occur fairly often.

    The static analyzer:
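
    (The analyzer itself is elided in this listing. As an illustration of the core idea only, here is a minimal sketch of mine, not ohcamacj's code: it recognizes "simple" loops, meaning bodies built only of < > + - with zero net pointer movement and a net change of -1 at the loop cell, which can be replaced by straight-line arithmetic.)

    use strict;
    use warnings;

    my $src = '++[<+>-]';    # hypothetical input
    while ( $src =~ /\[([^][]*)\]/g ) {
        my $body = $1;
        next if $body =~ /[^><+-]/;    # not a simple loop
        my %delta;
        my $ptr = 0;
        for my $op ( split //, $body ) {
            if    ( $op eq '>' ) { $ptr++ }
            elsif ( $op eq '<' ) { $ptr-- }
            elsif ( $op eq '+' ) { $delta{$ptr}++ }
            else                 { $delta{$ptr}-- }
        }
        # reducible only if the pointer returns home and the loop
        # counter cell drops by exactly one per iteration
        next unless $ptr == 0 and ( $delta{0} // 0 ) == -1;
        delete $delta{0};
        print "[$body] => ",
            join( '; ', map { sprintf "cell[p%+d] += %d*cell[p]", $_, $delta{$_} }
                        sort { $a <=> $b } keys %delta ),
            "; cell[p] = 0\n";
    }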
C and other stuff
2 direct replies — Read more / Contribute
by perlaintdead
on Jan 12, 2014 at 09:41
    I posted this, then reconsidered because honestly it's really not that great, but ww had beef, so here's the post:

    I made this little module to carry over some things I liked about C, as well as some other stuff. Just drop it into your folder and do a require 'spice.pm'; . Link: http://code.google.com/p/perl-spice/source/browse/trunk/spice.pm . The perldoc doesn't cover all the functions; I need to update it.

    UPDATE: implemented most of tobyink's suggestions and corrected the spelling of promt to prompt, as suggested by no_slogan.
    # Copyright 2013,2014 Dakota Simonds
    #   This program is free software: you can redistribute it and/or modify
    #   it under the terms of the GNU General Public License as published by
    #   the Free Software Foundation, either version 3 of the License, or
    #   (at your option) any later version.
    #
    #   This program is distributed in the hope that it will be useful,
    #   but WITHOUT ANY WARRANTY; without even the implied warranty of
    #   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    #   GNU General Public License for more details.
    #
    #   You should have received a copy of the GNU General Public License
    #   along with this program.  If not, see <http://www.gnu.org/licenses/>.

    use warnings;
    use strict;

    sub EXIT_SUCCESS (){ # C style
        1;
    }
    sub TRUE (){
        1;
    }
    sub FALSE (){
        0;
    }

    sub forceflush{
        $| = 1;   # $| is already a global; 'our $|' would not compile
        return EXIT_SUCCESS;
    }

    sub savefile{ # savefile(filename, data);
        my $SVFL;
        open $SVFL, ">>", $_[0];
        syswrite $SVFL, $_[1];
        close $SVFL;
        return EXIT_SUCCESS;
    }

    sub assert{ # C style
        my @assertions = @_;
        foreach my $testAssertion (@assertions){
            if( not eval $testAssertion ){
                print "Assertion failure: '$testAssertion'\n";
                exit 0;
            }
            #else{ print "win\n"; }
        }
        return EXIT_SUCCESS;
    }

    sub prompt{
        my $question = shift;
        my $style    = shift;
        my $styleOut;
        my $input;
        my %styles = (
            plain  => "\x20",
            normal => ': ',
            yn     => ' (y/n) ? ',
        );

        if($style eq ""){
            $styleOut = $styles{"plain"};
        }
        else{
            $styleOut = $styles{$style};
        }

        { # these curlies make redo work when input is not valid
            print $question, $styleOut;
            $input = <STDIN>;
            chomp $input;

            if(not $input =~ m/(y|n)/i and $style eq "yn"){
                print "\nThat is not a valid input!\n";
                redo;
            }
        }
        return $input;
    }

    sub strcat{
        my $compiled;
        for my $str (@_) {
            $compiled = $compiled . $str;
        }
        return $compiled;
    }

    sub longCatIsLooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooonnngggggg{
        return '';
    }

    1;

    __END__

    =head1 NAME

    Spice

    =head1 SYNOPSIS

        Technically all of these are functions, but the first three return
        true or false and take no arguments; they are meant to reduce
        magic numbers.

        EXIT_SUCCESS - C style return (true)
        TRUE         - returns 1
        FALSE        - returns 0

        forceflush   - same as $|=1
        savefile     - appends data to a file
        assert       - C style function. Like eval, but if the code returns
                       a false the program exits; otherwise returns 1.
                       Takes a list.
        prompt       - get input in one line
        strcat       - a function that concatenates its inputs
        savefile     - writes to a file in one line

    =head1 DESCRIPTION

    Simple Perl module that fixes small annoying things and provides a bit
    of C style functionality

    =head1 EXAMPLES

        forceflush;

        while( TRUE ){ do something }

        savefile("meows.txt", "data");

        if($foobar == FALSE){ do something }

        prompt("keywords","normal");
        prompt("do you want a cookie", "yn");
        prompt("do you want a cookie (Y/n)?", "plain");

        assert('$a=1');

        savefile($filename, $data);

        return EXIT_SUCCESS;

    =cut
The Grand Cat Dilation Equation
4 direct replies — Read more / Contribute
by perlaintdead
on Jan 11, 2014 at 10:34

    The basic theory behind this is that the more you age, the more cats you acquire. I'm asserting that you don't get your first cat until you are at least 19. (Check: at 18 the formula below gives int(0.6/0.618) = 0 cats; at 19, int(0.7/0.618) = 1.)

    print "age: "; my $age = <STDIN>; chomp $age; print int(( ($age - 12) / 10 ) / 0.618), " cats";
Halogen: A Tool For Monitoring IBM pSeries LPAR and Hypervisor Metrics
1 direct reply — Read more / Contribute
by bpoag
on Dec 31, 2013 at 12:49
    This post is the third I've made recently regarding an effort where I work to get a better handle on systems monitoring. In this post, we'll discuss Halogen, an in-house tool we've developed in Perl to extract, parse, and make sense of hidden performance metrics from IBM pSeries machines.

    If you've worked with IBM's pSeries line, you're probably familiar with the concept of LPARs. You're probably also familiar with the fact that IBM doesn't like people snooping around at the hypervisor layer without their blessing, let alone permission.

    Halogen is a tool that allows you as an administrator to peek into the black box, and monitor LPAR and hypervisor metrics in near-realtime.

    These days, LPAR virtualization requires a device called an HMC to manage it. The HMC is presented to the customer (you) as a walk-up 1U console that can optionally be accessed remotely via a web front-end. You might be surprised to learn that the HMC also gathers and stores a whole slew of different metrics behind the scenes for IBM, for the purposes of problem determination and analysis when things go wrong.

    Lucky for us, the HMC accepts ssh connections and allows users to authenticate with the same credentials used for logging into the web front-end. There's also a small set of commands available in the shell that IBM doesn't generally hand to customers, among them 'lslparutil' and 'lshwres'. Between these two commands, we can automate the process of passively extracting and making sense of performance metrics, live, as they are recorded.

    It's just a guess on my part, but I'm guessing IBM doesn't really advertise the fact that the HMC collects and stores performance metrics because the metrics themselves are horribly confusing and obtuse. There are also quite a lot of them; an individual sampling of one moment in time may return upwards of 50 different values. The only way we've been able to make use of these metrics is by careful examination of the data itself, to see how the numbers vary from moment to moment. I would call it reverse-engineering if it weren't for the fact that at least the table columns are named, albeit poorly.

    As I stated above, Halogen relies principally on two commands: lslparutil and lshwres. The first dumps the performance metrics, while the second allows you to make sense of what you're seeing from a configuration standpoint. Between the two, it's also possible to do a little data correlation, and thus build a picture of hypervisor-layer statistics. Halogen wakes up every 5 minutes, uses sshpass to collect the last 5 minutes' worth of metrics recorded on the HMC, and dumps the raw data into a MySQL database with column names that match the names given in the data. From there, the parts we're interested in are parsed.

    Part of the difficulty in making sense of the data pumped out by lslparutil is how IBM chose to record the values from sample to sample. First off, they're only dumped to disk on the HMC periodically (as best we can tell, roughly every 5 minutes, give or take), which limits both data granularity and how fresh a given set of readings can be. To make matters worse, the values that end up being recorded are NOT deltas accumulated since the last sample pass; they are more akin to meter readings. So, to get the deltas you're looking for, your script must either remember key points or look back at previous values within an output stream. Did I mention that the contents of the stream may or may not be in chronological order, may or may not mention the resources you're looking for, and may not even be fully populated? :)

    Figuring out pSeries metrics is a mess. I would liken it to figuring out how fast a car is going by measuring how much the odometer reading has changed in relation to the angle of the car's shadow on the road. You can figure out how much time has elapsed by using the car's shadow as a sundial, and use the odometer to measure distance. If you know those two, you can divine the car's speed. Thankfully, IBM's "odometer" and "shadow angle" measurements are extremely precise, so you'll get good results in the end... but it's horribly clunky. I'm at a loss to explain why IBM does it this way.

    How clunky? Here's an example: the code snippet in Halogen that determines real processor utilization for a given LPAR. Doing so requires this sort of mathematical gymnastics:

    ## You have 6 measurements by this point in the code.
    ## Each one is anywhere from 0 to a number in the gazillions.
    ## lastCC = Capped CPU cycle count recorded in the last sample.
    ## lastUC = Uncapped CPU cycle count recorded in the last sample.
    ## lastEC = Entitled CPU cycle count recorded in the last sample.
    ## CC = Capped CPU cycles as of now.
    ## UC = Uncapped CPU cycles as of now.
    ## EC = Entitled CPU cycles as of now.

    if ($EC-$lastEC>0) ## Div by zero check
    {
        $delta=((($CC-$lastCC) + ($UC-$lastUC)) / ($EC-$lastEC)) * 100;
    }
    In English: if you want to know how much CPU a given LPAR is chewing up at any given point in time, you take the sum of the differences between the capped cycle count and the uncapped cycle count, between now and the last viable instance of each... divided by the difference in entitled cycle count between then and now... times a hundred. :) Again, why IBM did it this way, versus simply dumping a value that reflects the impact the LPAR is having on the overall resource pool, I have no idea. But this is how we're able to divine what the picture looks like at the hypervisor layer; it's a matter of finding the right jigsaw puzzle pieces and reconstructing what the data isn't being straightforward about. A sketch of that look-back bookkeeping follows.
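
    To make the look-back concrete, here is a minimal sketch (my illustration, not Halogen's code; the sample field names are hypothetical) of turning unordered, meter-style samples into utilization percentages:

    use strict;
    use warnings;

    # Meter-style samples may arrive out of order and with gaps.
    # Remember the previous reading per LPAR, and only emit a delta
    # when the entitled-cycle denominator actually moves.
    my %last;

    sub utilization_deltas {
        my @samples = sort { $a->{time} <=> $b->{time} } @_;
        my @out;
        for my $s (@samples) {
            my $prev = $last{ $s->{lpar} };
            $last{ $s->{lpar} } = $s;
            next unless $prev;
            my $ec = $s->{ec} - $prev->{ec};
            next unless $ec > 0;    # same div-by-zero guard as above
            my $pct = ( ( $s->{cc} - $prev->{cc} )
                      + ( $s->{uc} - $prev->{uc} ) ) / $ec * 100;
            push @out, [ $s->{lpar}, $s->{time}, $pct ];
        }
        return @out;
    }

    for my $row ( utilization_deltas(
        { lpar => 'Alpha', time => 0,   cc => 0,   uc => 0,   ec => 1000 },
        { lpar => 'Alpha', time => 300, cc => 400, uc => 100, ec => 2000 },
    ) ) {
        printf "%s @ %d: %.1f%%\n", @$row;    # prints "Alpha @ 300: 50.0%"
    }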

    To handle the flood of data, we basically dump every line into a MySQL database, both to make querying the data simpler (why have Perl do the work of parsing when you can offload it to the SQL server?) and to give us historical metrics we can look back on to determine growth patterns for planning purposes.

    Despite it all, it is nonetheless possible to build a Perl script intelligent enough to parse the mountain of data supplied by lslparutil, within the context of lshwres, and recreate the data you need. I'd imagine lpar2rrd does the same trick to some extent, but what lpar2rrd lacks, Halogen makes up for. Once a sane and clear picture of system performance can be obtained, it's possible to do reporting. And with reporting, alerting.

    For example, let's say we know from lshwres that four LPARs, Alpha, Beta, Gamma and Delta, are on a given pSeries box. By parsing their combined real CPU usage metrics out of lslparutil, we can infer the overall CPU load being placed on the cores by each LPAR. That gives us a valuable piece of information we can graph over time, to see everything from whether performance drag is being caused by CPU pool depletion to whether we need to buy new hardware to handle future demand. The same goes for alerting. In our particular setup, Halogen alerts us if a given pSeries box shows greater than 95% CPU utilization for more than 5 minutes. That's a heads-up that our customers may begin seeing performance degrade, and a cue to offload one or more LPARs to more idle servers to free up resources.

    We've also built a front-end to Halogen that allows us to view metrics in groups: all of our VIO servers in one view, all of our database servers in another, all of our app servers in another. That way we can keep an eye on multiple systems that have a shared impact.

    At some point in the next few months, we're going to explore the possibility of having Halogen automate the process of dynamically moving LPARs around via Partition Mobility to quieter systems, in the manner mentioned above: if Halogen sees a pSeries box being overburdened for too long, it will attempt to mitigate the issue by PM'ing the LPAR somewhere else, continually keeping all of our pSeries boxes at roughly equal utilization. Here's how it looks:

    http://imgur.com/1rfsEU0

    I keep this panel up during work hours, just to keep tabs on what's going on globally. It's nice to have answers on hand when someone comes by asking if a given server or application seems slow. It also allows us to call BS on vendors who claim our systems aren't keeping up with the demands of their products. Perhaps most importantly, we have eyes where we did not have them before, and can administer all of our systems in a more intelligent fashion.

    Cheers,

    Bowie J. Poag

Tripwire: A Tool For Intelligent Parsing of Syslog Messages
2 direct replies — Read more / Contribute
by bpoag
on Dec 20, 2013 at 12:59
    As I mentioned in an earlier post, there's been a big push where I work to extend the horizon a bit when it comes to our systems monitoring. As a result, we've come up with a number of in-house tools to accomplish this. One is called 'Tripwire'.

    (BTW, I was several weeks into development before someone casually mentioned that there's already a product out there called Tripwire. Oops! In any event, we still refer to it as such, so forgive me for any confusion. We're not talking about the network administration tool.)

    Tripwire's job is simple: for anything that arrives on our syslogd server, analyze it, and tell us whether it's something out of the ordinary, looks suspicious, or looks like something we should be concerned about. Pulling this off, however, is a bit more complicated.

    At the top of the chain, our syslogd server funnels everything it receives into a MySQL database. This is actually a feature of syslogd, not something we came up with. This database gets populated non-stop, at a rate of about 200 messages per second. Everything we can think to point syslog at, from SAN devices to VMware to host OSes to our own apps, is told to funnel diagnostic messages to the syslogd server, where they eventually end up in a MySQL database we can draw from.

    Every 5 minutes, Tripwire wakes up and grabs three things: a list of "suspect words" we've given it (like 'fail', 'error', or 'critical'), a list of keywords and phrases we've instructed Tripwire to ignore, and the last 5 minutes' worth of messages that arrived in the MySQL database.

    In Perl, we compare each line against the list of suspect words. The results of this comparison are then passed to another routine that attempts to exclude them based on the list of words and phrases we've told it to ignore. This list is rather long; at the time of this writing, there are over 300 filter rules that Tripwire compares every message against. The survivors of this filtering process are deemed worthy of notifying a human about. A sketch of the two-pass idea is below.
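
    As a rough sketch of that two-pass filter (my illustration, not Tripwire's actual code; the word lists and rules are hypothetical):

    use strict;
    use warnings;

    # Pass 1: flag messages containing any "suspect" word.
    # Pass 2: drop flagged messages that match an ignore rule.
    my @suspect = qw( fail error critical panic );
    my @ignore  = (
        qr/error count:\s*0/i,           # hypothetical rule
        qr/failover test completed/i,    # hypothetical rule
    );

    sub survivors {
        my @messages   = @_;
        my $suspect_re = join '|', map quotemeta, @suspect;
        grep {
            my $msg = $_;
            $msg =~ /$suspect_re/i
                and not grep { $msg =~ $_ } @ignore;
        } @messages;
    }

    my @alerts = survivors(
        'kernel: I/O error on device sdb1',
        'app: failover test completed OK',
    );
    print "$_\n" for @alerts;    # only the sdb1 line survives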

    By this point, you might be asking why we do it this way. The reason is that the only thing we know about error messages is that there's no way of knowing what they're going to look like in advance. In other words, we can't go fishing for a specific fish; we need to catch all fish and throw back the ones we don't want. The only thing we do know is what we don't care about.

    As time goes on, and as the stack of filters becomes more and more efficient, the odds of getting a message that is both a) legitimate and b) something we've never seen before get smaller and smaller. After about 3 months, the things we hear from Tripwire are nearly always legitimate, because it has become increasingly difficult for a message to survive the filtration process.

    Occasionally, we do still get messages from Tripwire containing syslog messages we don't care about, of course. Each email contains a URL that, when clicked, instructs Tripwire to ignore similar messages in the future. The net result is an engine that grows increasingly efficient with time, thanks to a little encouragement from humans.

    As of December 2013, Tripwire has parsed 2.1 billion syslog messages, and is 99.719% accurate at telling the difference between real problems and things that simply look real but aren't. We collect all sorts of engine statistics as we go, to shed more light on problems when they do occur. For example, we track how many messages were received in the past 5 minutes, in graph form; if we see a spike indicating this number is unusually high, we have higher confidence as a team that the problem is real. We also track Tripwire's own decision making, in what we call a "signal to noise ratio": error messages that survive filtration are more likely to be legitimate if they occur within a wave of "suspect" messages. Normally, out of the 200 or so messages we receive per second, about 1.5 messages per second catch Tripwire's attention and warrant further analysis. If the rate of incoming "suspect" messages is high, the likelihood that messages surviving filtration are legitimate problems is also high. This sort of confidence value gives us an edge when dealing with the on-the-spot problems that occur every so often, where it's difficult to divine where within the organization the problem is occurring.
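
    As a toy illustration of that confidence value (invented numbers, not our real telemetry):

    use strict;
    use warnings;

    # One 5-minute window: total messages, suspect-pass hits, and
    # filtration survivors. A burst of suspects raises confidence
    # that the survivors are real problems.
    my %window = ( total => 60_000, suspect => 1_800, survived => 3 );

    my $suspect_rate = $window{suspect} / ( 5 * 60 );    # per second
    my $baseline     = 1.5;    # the usual ~1.5 suspects/second cited above
    my $burst_factor = $suspect_rate / $baseline;

    printf "suspect rate %.2f/s (%.1fx baseline), %d survivor(s)\n",
        $suspect_rate, $burst_factor, $window{survived};
    print "high confidence these are real problems\n" if $burst_factor > 2;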

    Here's a screenshot of the web front-end we built for our engine:

    http://imgur.com/xlh4Ov6

    There are plenty of opportunities for this model to grow beyond its current form. It wouldn't be too difficult, for example, for Tripwire to do its own functional analysis on certain problems and decide on its own whether or not they're legitimate. For example, it's not uncommon for network glitches to temporarily render a box unreachable for a moment. If one of our other monitoring tools happens to ping this box at that exact moment, it may think the host is offline and generate a syslog message to that effect; Tripwire can be taught to look for this pattern via a simple regex and, say, ping the same host on its own to see if the problem still exists before notifying us about it.

    As a side perk, it's also kind of hilarious when Tripwire finds things wrong with our systems via syslog before the vendor-supplied monitoring tools do, or finds things that the vendor-supplied monitoring solutions either miss or fail to report. :)

    tl;dr - We have an in-house mechanism that parses our incoming syslog stream, looking for keywords that may indicate problems and filtering them against a set of exclusion rules we've provided the engine. This is as close as we can get to having a crystal ball when it comes to monitoring, and its output gives us a heads-up on events that vendor-supplied monitoring tools can't, or won't. Proactive is better than reactive!

    Cheers,
    Bowie J. Poag

Monolith: A Clever Tool For Monitoring Regularly Scheduled Tasks
1 direct reply — Read more / Contribute
by bpoag
on Dec 19, 2013 at 12:06

    Where I work, we've recently had a big push to improve and modernize our approach to systems monitoring. I thought I'd take a little time to share some of the approaches we've come up with, and how they're benefiting us.

    In most medium-to-large production environments, you generally find one or more systems with regularly scheduled jobs. Cron does a nice job of running them, but suffers from one fatal flaw: it's not human. :) Should one of these regularly scheduled jobs kick off and suddenly die, or otherwise fail to run at all, it's up to you or cron to funnel the results of stdout/stderr to someone, or up to the script itself to generate some sort of notification in the event that it was unable to run successfully. But what happens when things stop working, and it's only after days, weeks, or months that anyone notices? We've developed a third way, called Monolith, to detect when this state happens.

    Suppose you gave each of your regularly scheduled jobs the ability to call home to a centralized database. After a while, maybe after 3 or 4 executions, a bit of a track record would begin to develop: a track record that can tell you when the next invocation of that command should show up.

    Enter Monolith. Monolith is a two-part tool. The first part, a simple call-home script, takes just one argument: an "entity name", which usually equals the name of the script itself. When run, it makes a connection to a MySQL database and adds a row to a table saying, "Hi! I'm {entityName} on {host}, and it's currently {time} where I am". The other part of the tool is a script that watches this database, looking for entities that have stopped calling home. In other words, if you know that entity "foobar.pl" usually checks in every 200 seconds, and the last time it checked in was more than 200 seconds ago, you know that there's a problem with foobar.pl, and an alert can be generated to that effect. Incidentally, we set our detection threshold at 20%, meaning that if something known to check in every 100 seconds hasn't checked in for the past 120 seconds, an alert is generated. The core of that staleness check is sketched below.
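
    A minimal sketch of the watcher's check (my illustration, with a hypothetical in-memory structure standing in for the database, not Monolith's actual code):

    use strict;
    use warnings;

    # Per entity: the typical interval learned from its track record,
    # and the time of its last check-in (both in epoch seconds).
    # The 1.20 grace factor matches the 20% threshold described above.
    my $grace = 1.20;

    sub overdue_entities {
        my ( $now, %entities ) = @_;
        grep {
            $now - $entities{$_}{last_seen}
                > $entities{$_}{interval} * $grace
        } sort keys %entities;
    }

    my @late = overdue_entities(
        time(),
        'foobar.pl' => { interval => 100, last_seen => time() - 130 },
        'backup.pl' => { interval => 200, last_seen => time() - 150 },
    );
    print "ALERT: $_ has stopped calling home\n" for @late;   # foobar.pl only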

    Here's the call-home script:
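
    (The script itself is elided in this listing; what follows is a stand-in sketch of mine that matches the description above. The database name, table, columns, and credentials are all hypothetical.)

    #!/usr/bin/perl
    # Stand-in for the call-home half: one argument (the entity name),
    # one row per check-in recording entity, host, and time.
    use strict;
    use warnings;
    use DBI;
    use Sys::Hostname;

    my $entity = shift @ARGV or die "usage: $0 entityName\n";

    my $dbh = DBI->connect( 'dbi:mysql:database=monolith;host=dbhost',
        'monolith', 'secret', { RaiseError => 1 } );

    $dbh->do( 'INSERT INTO checkins (entity, host, checkin_time)
               VALUES (?, ?, NOW())',
        undef, $entity, hostname() );

    $dbh->disconnect;

    Integration into a monitored job is then genuinely one line, e.g. running something like monolith-callhome.pl foobar.pl (name hypothetical) at the end of a successful run.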

    We have taken this idea, the ability to predict when something should have called home but hasn't, and greatly expanded upon it. Monolith is now a status dashboard that gives near-realtime status on over 200 different entities running across about 30 different hosts. To begin monitoring anything, all it takes is adding a single line to the script you want monitored, and you're done. A more clever use is to call home to Monolith only if the script was successful; that way, if the script ran but failed operationally for some reason, that too can be detected and resolved. Anything that runs at regular intervals, and whose state can be conveyed in terms of on/off, successful/not successful, or present/not present, can be visualized.

    Here's what our front-end to Monolith looks like, in-house:

    http://i.imgur.com/kf7nYA6.png

    Our organization now has 200+ more pairs of automated eyes carefully ensuring that everything we have is working as expected, and alerting us when it's not. It has already borne substantial fruit: on occasions when something systemic broke and affected the ability of several scripts on several different hosts to run, it helped greatly to have a visual map of what was broken, so that we could be 100% confident we had fixed the problem in every place.

    tl;dr - We have a tool that tracks regularly scheduled tasks to ensure they're calling home at regular intervals. When they deviate from the drum pattern they've established for themselves over time, or stop phoning home altogether, we know about it immediately, versus being caught off-guard and finding out at some point down the road.

    Cheers,

    Bowie

Text Based Perl Game
7 direct replies — Read more / Contribute
by spoonman2525
on Dec 17, 2013 at 21:22

    I'm new to Perl, so take it easy on me. I made this little game to practice.

CUDA::Minimal works again
2 direct replies — Read more / Contribute
by dcmertens
on Dec 16, 2013 at 21:58

    Hey monks!

    I've posted a chronological account of my day at blogs.perl.org. Here is the condensed version.

    I learned parallel programming, and programming on nVidia video cards, back in 2010. By early 2011, I had written a few modules to make it easier to write CUDA code, using Perl mostly for the memory shuffling. Today, I brushed off the first (ExtUtils::nvcc) and realized that the second, CUDA::Minimal, works on all versions of Perl except v5.16.

    A few months ago I concluded that CUDA::Minimal was doomed when it didn't compile on v5.16. Now that I know that it compiles on v5.18, I'm a lot happier!

DNS lookups
2 direct replies — Read more / Contribute
by perlaintdead
on Dec 11, 2013 at 02:52

    I just needed an nslookup-like tool, so I wrote this one-liner.

    F:\scripts>pllookup.pl perlmonks.org bing.com example.com

    use Socket;print$_,"\n"and map$a.=ord.".",split'',inet_aton$_ and chop$a and print$a,"\n\n"and$a=""for@ARGV;
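
    Unrolled for readability (my expansion of the same logic, with a defined-check the one-liner omits):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Socket;

    for my $host (@ARGV) {
        print "$host\n";
        my $packed = inet_aton($host);    # undef if resolution fails
        if ( defined $packed ) {
            # each byte of the packed address is one octet
            print join( '.', map ord, split //, $packed ), "\n\n";
        }
        else {
            print "lookup failed\n\n";
        }
    }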
Switch between gVim and Bash on Windows via Ctrl-Z and fg
1 direct reply — Read more / Contribute
by sg
on Nov 09, 2013 at 20:50

    The script below helps move between gvim and a bash-shell on Windows using the commands Ctrl-Z and fg.

    #!/c/opt/perl64/bin/perl

    BEGIN {(*STDERR = *STDOUT) || die;}

    =head

    Switch between gVim and Bash on Windows via Ctrl-Z and fg
    ---------------------------------------------------------

    This script helps move between gvim and a bash-shell on Windows using
    the commands Ctrl-Z and fg.

    My work-flow involves spending almost all my time in gvim on Windows.
    Although I have some tools to help me remain in gvim while performing
    tasks outside of gvim, I occasionally have to move to a bash shell.

    Although there are situations -- such as those involving gvim with no
    file open (issue: implementation does not look for the correct gVim
    title) or multiple gvim or bash windows (issue: implementation does not
    offer choice among multiple windows) -- where the implementation is not
    fully effective, it is satisfactory for my needs.

    Usage:
    -----

    A) The following gVim mappings:

         nnoremap <c-z> :silent !perl c:\opt\bin\czfg.pl 0 % %:p:h %:p:t<cr>
         nnoremap <s-z> :silent !perl c:\opt\bin\czfg.pl 0 % %:p:h %:p:t register<cr>

       assume that this script is located at c:\opt\bin\czfg.pl

    B) Before this script can be used, both gVim and bash windows must exist.

    C) The first time this script is used to switch between gVim and bash,
       it must be from gVim, and the command to be issued -- as per the above
       mapping -- is the normal mode command <Shift-Z>. Doing so will
       not only switch focus to the bash window but also define an alias fg
       to get back to gVim from bash.

    D) Subsequent switches between gVim and bash would be via <Ctrl-Z> and fg.

    Non-perl Alternatives:
    ---------------------

    1) In Windows 7, the bash and gVim windows can be pinned to the task-
       bar, say, as the first and second applications respectively. Then
       <Win-1> and <Win-2> activate either the bash or the gVim window.
       If multiple bash or gVim windows are present, use <Ctrl-Win-1> and
       <Ctrl-Win-2>.

    2) The console ConEmu is getting better at supporting 256 colors for
       console vim, but wasn't quite there as of November 3, 2013:
       https://groups.google.com/d/msg/conemu_ml/nThX05Kal0E/AEOkK3Px6dYJ

    =cut

    use diagnostics;
    use warnings;
    use strict;
    $| = 1;
    #-----------------------------------------------------
    use Win32;
    use Cwd;    # for Cwd::abs_path below
    use Win32::GuiTest qw(
        SetForegroundWindow GetChildWindows GetWindowText
        SendKeys SetFocus ShowWindow SW_RESTORE EnableWindow
    );

    defined $ARGV[ 0 ] or exit;
    my $win_id    = $ARGV[ 0 ];
    my $buf_str   = $ARGV[ 1 ];
    my $win_str_p = $ARGV[ 2 ];
    my $win_str_f = $ARGV[ 3 ];
    my $register  = $ARGV[ 4 ];

    # if called via fg in bash, go back to gVim and exit
    if( ! $buf_str )
    {
        SetFocus( $win_id );
        ShowWindow( $win_id, SW_RESTORE );
        EnableWindow( $win_id, 1 );
        SetForegroundWindow( $win_id );
        exit;
    }

    # if here, script has been called from gVim

    # gVim window name has path within brackets
    $win_str_p = "($win_str_p)";

    # For use in pattern matching, escape '\', '(' and ')'.
    $win_str_p =~ s,\\,\\\\,g;
    $win_str_p =~ s,\(,\\(,;
    $win_str_p =~ s,\),\\),;

    # clean up buffer paths that are relative to local directory
    $buf_str =~ s,^\.,,;
    $buf_str =~ s,^\\,,;
    $buf_str =~ s,^/,,;

    # For files in local directory, don't use path component for gVim window name
    ( $buf_str !~ /\\/ ) and ( $buf_str !~ /\// ) and $win_str_p = '';

    my $go_back_to = '';
    my $go_to      = '';

    # undocumented feature: user can specify target of fg command
    $win_id and $go_back_to = $win_id;

    # Get id of gVim and bash windows by iterating through all windows
    for my $a_window ( GetChildWindows(0) )
    {
        my $foo = GetWindowText($a_window);

        # get first match to gVim window title
        if( ( ! $go_back_to          ) and
            ( $foo =~ m/gvim/i       ) and
            ( $foo =~ m/$win_str_f/i ) and
            ( $foo =~ m/$win_str_p/i ) )    # empty string will match
        {
            $go_back_to = $a_window;
        }

        # get first match to bash window title
        if( ( ! $go_to ) and ( $foo eq 'bash' ) )
        # reset any messed up title via: echo -ne "\033]0;bash\007"
        {
            $go_to = $a_window;
        }
    }

    # bring the bash window forward
    SetFocus( $go_to );
    ShowWindow( $go_to, SW_RESTORE );
    EnableWindow( $go_to, 1 );
    SetForegroundWindow( $go_to );

    # Done unless user has requested that the alias fg be created
    $register and ( $register eq 'register' ) or exit;

    # If here, need to build and issue the command to create the alias fg

    # Assuming script is not in path, get full path to script
    my $script = Cwd::abs_path( $0 ); # didn't work: File::Spec->rel2abs( $0 );

    # Convert '\' to '/' for use in bash
    $script =~ s,\\,/,g; # weird: have seen Cwd::abs_path use both '\' and '/'

    # create the alias fg
    SendKeys("alias fg=\'$script $go_back_to\'\n");

    __END__
Cstracker: a graphic window for Csound unified format files
No replies — Read more | Post response
by emilbarton
on Nov 02, 2013 at 09:58

    Cstracker is a new Perl tool for Csound users. Installation is light, apart from some Tk modules that may need to be installed from CPAN. I took my inspiration from Opcode's Vision, software I used in the nineties. Of course notes cannot be arranged by frequency or displayed on music staves, because Csound's parameter design doesn't allow such generalization. Instead, notes are presented graphically by section, then by instrument, on a canvas showing their relative position in time. When clicked, each note pops up a parameter window where it can be modified, muted or deleted. Each note can be given a name and color different from the defaults. New notes can be created by right-clicking on a free position in the canvas.

    When all the modifications are made, the resulting score can replace the original CSD or not. Tracker data is saved into the score as short tags at the end of each line (a comment following any other user comment). Cstracker launches Csound to play its output; Windows users may have to check the terminal invocation, as it has not been tested on that platform yet. Cstracker is fast, and faster than editing a score line by line.

    In the future, a selection of notes should also be modifiable in one shot by name, color, and position in time, and even quantized. I'd be delighted if anyone offered to contribute to these improvements (currently summarized as inactive options in the track submenu) or to any other aspect of this script.

    At present Cstracker seems to respect most of the CSD files it is given. One shortcoming I've met concerns commented lines within the score: commented i-statements are correctly processed as muted notes, but lines containing only free-standing comments are pushed to the end of the section's score. Another weakness: Cstracker doesn't create a CSD file from scratch; one has to start with an existing score, because the software doesn't treat orchestra code at all.

    Cstracker exists both as a standalone and as a Perl module in Csgrouper that can, with certain precautions, extend existing Perl code for Csound music.

    The standalone can be downloaded on Github: https://github.com/emilbarton/Cstracker.

    Please notify me of any case where Csound statements aren't treated correctly, or of any other bugs.

    message cross-posted on the Csound mailing list, Linuxmusicians.com and Perlmonks.org

A regular expression that groks UUIDs
No replies — Read more | Post response
by fbicknel
on Nov 01, 2013 at 13:55
    m/^[[:xdigit:]]{8}-(?:[[:xdigit:]]{4}-){3}[[:xdigit:]]{12}$/

    Humbly submitted.

    Fun activity: add groupings to parse bits (words) of it out if you're so inclined. (See RFC 4122 for the bits' meanings. Also likely better done through unpack, but I warned you this was a fun activity.)
    ($time_low) = $uuid =~ m/^([[:xdigit:]]{8})-(?:[[:xdigit:]]{4}-){3}[[:xdigit:]]{12}$/
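
    For instance, a fully grouped version might look like this (my sketch; field names per RFC 4122, and the sample UUID is the one given in that RFC):

    my $uuid = 'f81d4fae-7dec-11d0-a765-00a0c91e6bf6';

    my ( $time_low, $time_mid, $time_hi_and_version, $clock_seq, $node )
        = $uuid =~ m/
            ^ ([[:xdigit:]]{8})  -    # time_low
              ([[:xdigit:]]{4})  -    # time_mid
              ([[:xdigit:]]{4})  -    # time_hi_and_version
              ([[:xdigit:]]{4})  -    # clock_seq_hi_and_reserved + clock_seq_low
              ([[:xdigit:]]{12}) $    # node
          /x;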
Extract inline script from an XHTML with XML::Twig
2 direct replies — Read more / Contribute
by ambrus
on Oct 30, 2013 at 08:15

    In the chatterbox, SagaraSouske has asked help about extracting text from script elements in an XHTML document. He wrote code using HTML::TreeBuilder. This node shows how to do the equivalent with XML::Twig. The XHTML example is directly from SagaraSouske.

    #!perl
    use 5.014;
    use XML::Twig;

    my $twig = XML::Twig->new;
    $twig->parse(xmlinput());
    for my $tr_elt ($twig->findnodes(q(//tr[@class='Odd']))) {
        if (my($script_elt) = $tr_elt->findnodes(q(td[1]/script))) {
            say "Script: ", $script_elt->text;
        }
        if (my($td2_elt) = $tr_elt->findnodes(q(td[2]))) {
            say "Other: ", $td2_elt->text;
        }
    }

    sub xmlinput { q{
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html lang="en" xmlns="http://www.w3.org/1999/xhtml">
    <body>
    <table border="0" cellpadding="4" cellspacing="0" class="DataGrid" width="1000px">
    <tr class="Odd"><td><script type="text/javascript">Decode("%31%31%39%2e%32%35%33%2e%36%31%2e%31%32%31")</script></td><td>Other Data</td></tr>
    <tr class="Even"><td><script type="text/javascript">Decode("%32%30%33%2e%31%35%36%2e%32%30%37%2e%32%34%39")</script></td><td>Other Data</td></tr>
    <tr class="Odd"><td><script type="text/javascript">Decode("%32%32%32%2e%36%32%2e%32%30%37%2e%37%30")</script></td><td>Other Data</td></tr>
    <tr class="Even"><td><script type="text/javascript">Decode("%32%30%32%2e%31%31%32%2e%31%31%37%2e%39%34")</script></td><td>Other Data</td></tr>
    <tr class="Odd"><td><script type="text/javascript">Decode("%35%38%2e%32%30%2e%32%32%38%2e%32%32")</script></td><td>Other Data</td></tr>
    <tr class="Even"><td><script type="text/javascript">Decode("%31%31%39%2e%32%35%33%2e%36%31%2e%31%32%30")</script></td><td>Other Data</td></tr>
    <tr class="Odd"><td><script type="text/javascript">Decode("%32%32%33%2e%38%37%2e%31%39%2e%35")</script></td><td>Other Data</td></tr>
    </table>
    </body>
    </html>
    }; }
    __END__

    Update: here's the output:

    Script: Decode("%31%31%39%2e%32%35%33%2e%36%31%2e%31%32%31")
    Other: Other Data
    Script: Decode("%32%32%32%2e%36%32%2e%32%30%37%2e%37%30")
    Other: Other Data
    Script: Decode("%35%38%2e%32%30%2e%32%32%38%2e%32%32")
    Other: Other Data
    Script: Decode("%32%32%33%2e%38%37%2e%31%39%2e%35")
    Other: Other Data
Multiplexing log output: Log4perl of the poors
No replies — Read more | Post response
by Discipulus
on Oct 21, 2013 at 03:03
    'Cool Uses for Perl' is a little intimidating a heading for this snippet..
    I was writing a program restricted strictly to core modules, and I was thinking of adding some logging functionality.

    I looked at Log4perl and wow... what a suite of features! I decided to try to replicate some of them, starting with multiplexing the output.

    To use this you need to declare two hashes. The first is for handlers, and contains as many keys as you want. Each key holds a three-element array: the glob of an already opened filehandle, the error level for that handler, and an anonymous sub that composes the final log line for that handler. These subs receive two arguments: the level and the message (e.g. ERROR, 'Cannot read').

    The second hash is a dispatch table that merely filters out unwanted messages for a particular handler.

    The small sub does an ugly cut on the incoming message and calls some code for each handler defined.

    As a good side note, you can change the level of a handler at runtime.

    Comments and improvements welcome.

    #!perl
    use strict;
    use warnings;
    $|++;

    # open some FHs you'll use; handler 0 is the already opened STDOUT
    open (LOG,    '>',  'log-multiple-output.log') || die;
    open (BIGLOG, '>>', 'biglog.log')              || die;

    # handlers: GLOB, LEVEL, COMPOSITION SUB receiving $lvl, $msg
    my %loghdl = (
        0 => [ *STDOUT, 'ERROR',
               sub{ return $_[0]."> $_[1]\n" }, ],
        1 => [ *LOG,    'INFO',
               sub{ my(undef, undef, $line) = caller(2);
                    return $_[0].">".(scalar localtime(time)).' "'.$_[1].'" line '.$line."\n" }],
        2 => [ *BIGLOG, 'ERROR',
               sub{ my(undef, undef, $line) = caller(2);
                    return $_[0].">".(scalar localtime(time)).' "'.$_[1].'" line '.$line."\n" }],
    );

    # the filters declaration
    my %wanted = (
        DEBUG   => sub { my ($to,$dbglvl,$action) = (shift,shift,shift);
                         grep (/$dbglvl/,qw(DEBUG)) ? print $to $action->(@_) : 0; },
        INFO    => sub { my ($to,$dbglvl,$action) = (shift,shift,shift);
                         grep (/$dbglvl/,qw(DEBUG INFO)) ? print $to $action->(@_) : 0; },
        WARNING => sub { my ($to,$dbglvl,$action) = (shift,shift,shift);
                         grep (/$dbglvl/,qw(DEBUG INFO WARNING)) ? print $to $action->(@_) : 0; },
        ERROR   => sub { my ($to,$dbglvl,$action) = (shift,shift,shift);
                         grep (/$dbglvl/,qw(DEBUG INFO WARNING ERROR)) ? print $to $action->(@_) : 0; },
        FATAL   => sub { my ($to,$dbglvl,$action) = (shift,shift,shift);
                         grep (/$dbglvl/,qw(DEBUG INFO WARNING ERROR FATAL)) ? print $to $action->(@_) : 0; },
    );

    ## the sub cuts the head off the incoming string
    sub ulog {
        my $msg = shift;
        chomp $msg;
        (my $cmd = $msg) =~ s/\s+.*//g;
        $msg =~ s/^$cmd\s+//;
        $cmd = uc $cmd;
        foreach my $hdl (sort keys %loghdl) {
            exists $wanted{$cmd}
                ? $wanted{$cmd}->( @{$loghdl{$hdl}}, $cmd, $msg )
                : print {$loghdl{$hdl}->[0]} 'Unknown loglevel>'.lc($cmd).' '.(lc($cmd) eq $msg ? '' : $msg)."\n";
        }
    }

    # EXAMPLE of use
    ulog 'Eccolo qui';
    ulog ('DEBUG debbuuuugga');
    ulog ('Debug debbuuuugga');
    ulog ('INFO infohere');
    ulog ('WARNING warn!! you are not authorized');
    ulog ('ERROR unable to read');
    ulog ('FATAL cannot find Perl..');

    print "\nchanging lvl to debug..\n\n";
    $loghdl{0}->[1] = 'DEBUG';

    ulog 'eccolo qui';
    ulog ('DEBUG debbuuuugga');
    ulog ('debug debbuuuugga');
    ulog ('INFO infohere');
    ulog ('WARNING warn!! you are not authorized');
    ulog ('ERROR unable to read');
    ulog ('FATAL cannot find Perl..');
    I hope someone can find this useful.

    L*
    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
