If you've discovered something amazing about Perl that you just need to share with everyone, this is the right place.

This section is also used for non-question discussions about Perl, and for any discussions that are not specifically programming related. For example, if you want to share or discuss opinions on hacker culture, the job market, or Perl 6 development, this is the place. (Note, however, that discussions about the PerlMonks web site belong in PerlMonks Discussion.)

Meditations is sometimes used as a sounding-board — a place to post initial drafts of perl tutorials, code modules, book reviews, articles, quizzes, etc. — so that the author can benefit from the collective insight of the monks before publishing the finished item to its proper place (be it Tutorials, Cool Uses for Perl, Reviews, or whatever). If you do this, it is generally considered appropriate to prefix your node title with "RFC:" (for "request for comments").

User Meditations
Do you like Perl?
4 direct replies — Read more / Contribute
by choroba
on Feb 13, 2018 at 15:21
    Do you like Perl? Do you count yourself among people?

    If both your answers are "Yes", you might want to add your reasons to the discussion Why do people like Perl? on

    I guess we've had similar threads here over the years, but talking to a broader audience can be different.

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
RFC Win32::Event2Log ..gimme back my logfiles
2 direct replies — Read more / Contribute
by Discipulus
on Jan 31, 2018 at 07:13
    Hello monks,


    I have produced my first (at least in my intention) serious module: Win32::Event2Log, for the moment on github (current version). I tried to follow all the best practices for module creation (a long read..) and I announced it on prepan last week, but I got no comments back.

    The Windows Event Viewer, in my experience, is good only for giving you carpal tunnel syndrome, so in the past I arranged a bunch of Perl programs that inspected its registries using Win32::EventLog to trigger some action. This approach is difficult, and every time I had to restart from scratch. So I had the (cool) idea to write an engine that reads events and, if a given rule matches, writes them to a plain logfile; from there on, the road is easy for a Perl programmer.


    Essentially, as explained in its POD, the module uses Win32::EventLog to parse Windows events and write them to plain logfiles. The module is rule based: a rule is a minimal set of conditions that must be met for an entry to be written to a logfile. You must add valid rules before starting the engine. Once started, the engine checks events every x seconds (specified with the interval argument), and for every registry (System, Application, Security, Installation or a user-defined one) requested by at least one rule, it checks for the specified event source and, optionally, for some text contained in the event's description.

    The resulting engine is designed to survive shutdowns and user interruptions, whether issued with CTRL-C in the console or by killing the PID: the next run of the program will read just the still-unparsed events. This is achieved by storing the number of the last event read (for each registry) in a file specified with the lastreadfile argument.

    A simple example of its usage (as in the example section of the module) is the following:

    use strict;
    use warnings;
    use Win32::Event2Log;

    my $main_log         = $0.'.mainlog.log';
    my $last_numbers_log = $0.'.last_numbers.log';
    my $sys_errors_log   = $0.'.System_err_warn.log';

    my $engine = Win32::Event2Log->new(
        interval     => 60,
        endtime      => 0,
        mainlog      => $main_log,
        verbosity    => 2,
        lastreadfile => $last_numbers_log,
    );

    $engine->add_rule(
        registry  => 'System',
        eventtype => 'error|warning',
        source    => qr/./,
        log       => $sys_errors_log,
        name      => 'System errors and warnings',
    );

    $engine->start;

    But since I've always produced modules as private containers of loosely related functions, I'm a bit of a newbie with regard to CPAN standards. In fact I plan to release it on CPAN soon, but not before listening to your advice. So my requests for comments are:


    1) Name:
    I think Win32 is naturally the correct namespace, but what about Event2Log? It seemed the best choice to me.

    2) Testing:
    I have read a lot in this field in the past but (my sin) practiced it almost never. I've done my best writing 01-basic.t (here (current version)). How can the tests be improved? Do I need to bail out of the tests if $^O is not MSWin32? I tested only the public methods I offer: should I also test private functions?

    3) Design and enhancements:
    Even if the module runs well enough in my tests on various scenarios, I already plan to modify it. In fact, currently the core of the engine is a while (1) {...} loop where new events are checked and rules applied (you can see it here (current version)).

    I plan to abstract the reading part, maybe adding a Win32::Event2Log::Reader submodule. In fact I also want the user to be able to choose whether to use Win32::EventLog as the reader or a wrapper around wevtutil.exe that I plan to write soon. How do I achieve this? By having Win32::Event2Log::Reader use Win32::EventLog by default, and Win32::Event2Log::Reader::Wevtutil subclass Win32::Event2Log::Reader? What is the cleanest design for such a modification? Which tests must I add?

    4) Design of an eventual Win32::Event2Log::Reader:
    This seemed to me a good use for an iterator: $reader->next will replace a lot of odd code in my current module. What I'm wondering about is the wrapper around the system call to wevtutil.exe.

    Since system calls are expensive, my plan is, the first time the iterator is initialized, to query all previous events and return them one at a time: on this first call the array of events can be many MB, while subsequent calls will possibly return just a few bytes. This seems to go against the good design of a lightweight iterator. Is it justifiable, to avoid possibly many system calls?
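    A buffering iterator could keep next cheap while batching the expensive call. Here is a minimal sketch; the package name and the fetch callback are hypothetical stand-ins, not part of the module:

    ```perl
    package My::BufferedReader;    # hypothetical name, for illustration only
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        # 'fetch' is a code ref standing in for the expensive wevtutil.exe call;
        # it should return the next batch of events (possibly empty).
        return bless { fetch => $args{fetch}, buffer => [] }, $class;
    }

    sub next {
        my ($self) = @_;
        # Refill the buffer in one (possibly large) batch only when it is empty,
        # so the expensive external call happens rarely.
        @{ $self->{buffer} } = $self->{fetch}->() unless @{ $self->{buffer} };
        return shift @{ $self->{buffer} };
    }

    1;
    ```

    The iterator object itself stays small; only the occasional refill pays the big cost, which seems like a defensible answer to the lightweight-iterator worry.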

    Thanks for reading.


    There are no rules, there are no thumbs..
    Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
RFC: Tutorial for "Using Google Cloud Shell with Perl"
2 direct replies — Read more / Contribute
by Corion
on Jan 28, 2018 at 09:34

    Why would you want to?

    Maybe you are just starting out with Perl and don't have a computer set up for Perl. Unix computers already include Perl, but maybe you don't have the permissions to run it on your computer.

    Maybe you just want to try a Perl module for some external program that you don't want to install on your machine. Maybe you just got a bug report for Linux that you can't easily replicate. Maybe you are online and don't have access to your home machine. Maybe you just need 5GB of storage quickly.

    You just need four steps to get to Perl in the Google Cloud Shell:

    1. Log in with your Google credentials

      That's how you pay for it - with information about yourself. Google will monitor what programs you invoke but not the command line parameters. In return, you get 5GB of permanent storage and a 2GB RAM virtual machine that includes Perl 5.24, other programming languages, the Google Cloud SDKs and other stuff.

    2. Set up CPAN to use local::lib

      Run the cpan command to perform the initial setup:

      cpan

      There, you need to answer two questions:

      • Choose the quick, no questions asked setup
      • Choose the proposed local::lib method of installing modules
    3. Install some stuff that you maybe want to try out

      Upgrade Test::More, because the Debian stock 1.01 version causes some spurious test failures:

      cpan Test::More

      Then install some modules to try out:

      cpan App::cpanminus Moo Future::AsyncAwait DBD::SQLite
    4. Enjoy

    More documentation on the Cloud Shell

    Using the Cloud Shell as a web development environment

    The cloud shell also comes with an included web proxy so that you (and only you) can try out web applications served from any web server on that machine. This makes the Cloud Shell a convenient testbed to try out web frameworks like Mojolicious, Dancer2, Dancer or even CGI::Application in PSGI mode.

    Using Mojolicious

    Install Mojolicious

    cpan Mojolicious

    Run minimal Mojolicious program:

    perl -Mojo -E 'a("/hello" => {text => "Hello Mojo!"})->start' daemon -l

    Visit /hello in the Web Preview pane

    Using Dancer2

    Install Dancer2

    cpan Dancer2

    Run minimal Dancer2 program:

    perl -MDancer2 -e 'set port => 8080; get "/" => sub { "<i>Just</i> Another <b>Perl</b> <u>Hacker</u>," }; dance'

    Using Dancer with Twiggy

    Install Dancer and Twiggy

    cpan Dancer Twiggy

    Run minimal Dancer program:

    plackup -e 'use Dancer; get "/hello/:name" => sub { return "Why, hello there " . param("name"); }; dance;' --port 8080 -s Twiggy

    Visit /hello/yourname in the Web Preview pane

"Your code sucks"
2 direct replies — Read more / Contribute
by afoken
on Jan 28, 2018 at 08:04

    Being paid for being an asshole

    "Your code sucks." I've said that more than once, sometimes quite literally; more recently, I tend to wrap it in a minimal bit of politeness. And much more often, I say "This [3rd party] code sucks". In fact, saying "your code sucks" and even "your design sucks" is part of my work.


    Well, actually not.

    Being paid for being a beancounter

    At work, we write code for our embedded systems that work in industrial, medical and aerospace environments. Some of our systems are quite harmless, less dangerous than a lamp. To cause damage or to harm people, you would have to throw the systems at people. But most systems have real-time requirements, control potentially dangerous machines or oxygen supplies, or similar stuff. So errors may cause real damage, hurt or kill people. One way to reduce risks is to do peer reviews, starting way before we even think about writing code. Of course, code is also peer reviewed in nearly all of our projects. We are quite used to poke in other people's code, search weak points, and do bean counting. It improves not only our products, but also the way we write code.

    Saying "your code completely sucks" is extremely rare. In fact, most times, it's the little details. Last minute changes in the code, hastily and/or interrupted, leaving a little bit of mess. Misleading names, documentation that was not updated to match changed code, left-over comments from previous iterations, a missing case in a switch, ignoring the style guide, you name it. At the end of a peer review, we have a list of problems in the code, and usually, author and reviewer agree without discussion that and how those problems have to be fixed. Sometimes, the author has to justify why and how (s)he has written a piece of code, and that this way is in fact correct. In those cases, the usual problem is lack of comments and/or documentation in the code.

    Impedance mismatch

    "Your code sucks" does not mean "you suck".

    A while ago, we had a project that has grown too much for our little team, so we decided to subcontract a little, quite independent part of the project to an external developer. We drafted a minimal requirements list and an interface specification, added our style guide, and had a meeting with the external developer. We gave him a suitable development board, hacked to the point that relevant parts were similar to the real product, a lot of ready-to-use hardware drivers, and waited for him to come back with working code.

    A second aspect of this approach was to search for someone who could help us in future projects by taking over parts of the development in busy times.

    What came back was a big mess of spaghetti code, completely ignoring the style guide, lacking documentation, and hardly working at all. A classic case for "your code sucks big time", but let's face it: If you search an external developer for long-term relations, you try to be positive and helpful: "Look, we need the code to match our style guide. That's written in the contract with our customer. We need documentation, and it has at least to compile on the target CPU. Yes, your dev board has a different CPU. Use #define and #ifdef instead of hardcoding. Compile for our target, even if you can't run that on the CPU we gave you. Do this, add that, remove those, don't copy and paste, use functions, bla bla bla. This is how to use doxygen: Just add an extra asterisk at the start of the comment, bla bla bla."

    Wash, rinse, repeat. The next iteration still sucked. And so did the third one. My written response to the fifth or sixth iteration was (not literally!) "your code sucks". I explained that every iteration took me more than an hour just to make it compile. I explained that we agreed on the expected behaviour of the code, but the code did not show that behaviour. That the behaviour and the form of his code were not acceptable. And that he was hired to save us time, not to cost us time.

    Half an hour later, my boss came around, telling me that the external developer had cancelled the contract because of my mail. The external developer had read it as "you suck". Well ...

    We agreed that my mail was not very polite, but also that the entire mail (and all previous ones) just criticized the code and documentation. My boss phoned him and discussed for more than an hour. They finally agreed on a final day in our office for handing over the code and making it run on the target system. The external developer worked with me on my computer, and we made his part of the software work on the target and added a lot of documentation.

    End of the story: We had the required part of software, in a state that worked, but was still ugly. We didn't change much after that day, and so that part is still ugly. It works, and I would like to clean up the last dirty corners, but it's not worth the time. Oh, and that external developer won't be hired for new jobs.

    You are too academic

    In a previous job in a medical environment, I had to write an interface between an existing piece of software and a new laboratory machine that replaced an older one. The machine reported its data via RS232, and a simple external program wrote the data into a file on a file server. So I copied the old driver into a new file and tried to make sense of the existing code.

    The system was written in what was originally a subset of C, but had evolved into some mix of the Hunchback of Notre-Dame, Gollum, and Salvatore from "The Name of the Rose". Not quite ideal conditions for writing safe code, even for experienced developers, but usable. Unfortunately, the software was written by a salesman who originally just sold the IDE for Hunchback-Gollum-Salvatore (HGS - not the real name, of course). He was hired to use HGS to write that medical software, ignoring all rules for developing medical software, and bypassing the in-house IT department. He had no idea how to write software, he had no idea how to plan his time, and he gave unreasonable promises of what the software would be able to do in no time. To make things even worse, a research diver was hired to help him develop the software.

    I was hired to replace the salesman-developer.

    So I read the driver code. It was just a single function. 1500 or 2000 lines of code in a single function. Some parts were copied five or six times instead of moving them to functions. And I found errors. Many errors. Obvious errors. Errors that no sane developer would make. Well, I was new on the job, and I was not sure if I understood all of HGS. So I RTFM, twice to be sure. I found that HGS documented that comments are "like in C". In C, comments don't nest. In the HGS compiler, comments don't nest. But in the editor of the HGS IDE, they do nest. So you end up with code that looks like it is commented out, but it is not. The compiler happily compiles it, and the runtime executes what looks like a comment in the IDE. I found that fopen returns a handle, or 0 on error. But alas, there is no way to find out what error has happened. Permissions, lack of privileges, locking, network error, non-existing file? You just get a 0 back from fopen(). There is no errno. No try-catch. No exceptions. And I found at least four more bugs in HGS itself.

    Back to the driver code. There were errors about every 10 lines of code, and they were real errors, even in HGS. I looked at some of the other code and found about the same error rate. So I used a little bit of Perl to simply extract all of the code that the salesman and the diver had written, and made Perl count the lines of code. Then I multiplied the lines of code by the error rate from the driver code. Tens of thousands of potential errors in medical software does not sound sane, does it?

    At the next management meeting, I raised the issue. I explicitly stated that the total number of errors was a rough estimate that might be too high by a factor of 10 if we were lucky. But even then, thousands of potential errors would remain in software where a single error could lead to severe medical complications in an emergency situation. I recommended rewriting the software from scratch, because the existing code base was in a horrible state and HGS does not help improve the situation.

    I was told by the managing director that I was "too academic". Well, if you prefer having the software help kill people ...

    A merger changed a lot of priorities, and so that piece of crap was assigned to someone else, in a different federal state. I was quite happy with that decision, and even more when I heard that they had decided to outsource that project and have it reworked.

    About two years later, the new old software was presented to the management, the IT and the laboratory teams. I sat in the rear corner, not really interested in that management show. "Look, new shiny buttons that look and work exactly like the old ones." But then someone from the laboratory team asked how much of the scary salesman code had survived. The presenter smiled. "We have removed almost all of that crap. It was actually easier just to start from zero than to fix the code. The software still looks and feels the same, but the errors are gone." I could not help smiling from ear to ear. The laboratory manager noticed that, raised his hand and said: "Look at Alexander, watch him smile. He told you to do exactly that two years ago."

    Well, not exactly. I had recommended getting rid of HGS, but the company that reworked the software was an HGS shop, so HGS stayed. I never read a line of the new code, but I'm sure that even the new code will have race conditions and will have trouble coping with I/O errors, because it is very hard to avoid race conditions in HGS, and it is nearly impossible to do sane error handling in HGS.


    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Singleton Design Patterns and Naming Things
2 direct replies — Read more / Contribute
by papidave
on Jan 24, 2018 at 23:02
    There are two hard problems in computer science: cache invalidation and naming things*. Questions about naming things, for me, show up the most when designing a new class (and I wrap nearly all my code into a class, these days). Since I used to earn my rent by writing code in C++, my method names in Perl tend to reflect that heritage.

    The most obvious example of this pattern is the way I name object constructors. If I have a package, Xyzzy, the constructor for that class is usually called Xyzzy::new. When the initialization of an object is expensive, I would wrap the constructor in a singleton design pattern, and call that method new. A simplified implementation of this constructor might look like the following:
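    A minimal sketch of such a constructor, with the singleton concealed inside new, might look like this (the cached field contents are illustrative, not from the original post):

    ```perl
    package Xyzzy;
    use strict;
    use warnings;

    my $instance;    # class-private cache holding the single object

    sub new {
        my ($class) = @_;
        # Build the (notionally expensive) object only once;
        # every later call silently reuses the cached instance.
        $instance //= bless { initialized => 1 }, $class;
        return $instance;
    }

    1;
    ```

    Every caller writes an ordinary-looking Xyzzy->new, and the class quietly hands back the same object each time.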

    This design pattern allowed me to conceal the singleton nature of a Xyzzy, and I used to think that was a good thing.

    Recently, however, the needs of my job called for me to write a substantial quantity of code in the Programming Language That Shall Not Be Named. That language was written with a philosophy that directly opposes TMTOWTDI: for any task you want to perform, there is One True Way you must do it. It is a philosophy that complicates the implementation of simple one-liners, but greatly reduces, I suspect, the time spent grading test questions which must be answered by writing code in that language.

    One of the True Way conflicts I encountered while working in this programming language was the implementation of singleton constructors. You cannot choose a different name for your constructor, and the memory allocation of the object is done externally before your constructor code gets invoked. In short, there is simply no way you can override the constructor with a singleton allocator, and any class method that implements a singleton design pattern must be explicitly invoked by the caller. This leaves me with something like
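    A Perl rendering of that explicit arrangement might look like the following sketch (names illustrative; the original post's sample did not specify the body):

    ```perl
    package Xyzzy;
    use strict;
    use warnings;

    my $instance;    # cache used only by instance(), never by new()

    # Plain constructor: always returns a fresh object.
    sub new {
        my ($class) = @_;
        return bless { initialized => 1 }, $class;
    }

    # Explicit singleton accessor the caller must opt into.
    sub instance {
        my ($class) = @_;
        $instance //= $class->new;
        return $instance;
    }

    1;
    ```

    Callers that want the shared object say Xyzzy->instance; callers that want a throwaway copy say Xyzzy->new.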

    Having switched to this new nomenclature, I find that singleton instances are only reused when I want them to be used. True, that's almost always, but this naming technique does leave me the option of constructing a new object instance if I wanted to do something ugly to it and didn't want to risk polluting the cache. On the other hand, if I want to use this pattern on a class that is already widely used, I have to go on a global search-and-destroy mission, replacing constructor calls with calls to instance(), if I want to benefit from the performance improvement that comes from using a singleton.

    These days, I still find myself banging my head on the desk when none of the four different ways I might solve a problem in Perl can be applied in The Other Language, but I think this one particular technique is beginning to grow on me. And I'm glad Perl follows TMTOWTDI; it allows me to bring these new techniques back into my regular job, and benefit from them here, as well.

    *Also, off-by-one errors. But if I had said that up above, someone might have accused me of being swayed by that Other Programming Language into parroting the Spanish Inquisition sketch, and that is a dead parrot.

Filtering methods in Perl debugger
No replies — Read more | Post response
by choroba
on Jan 24, 2018 at 06:57
    [arc444]: hi. Anyone know how to filter the list of methods returned when querying an object in the perl debugger ?
    [arc444]: for example : m $obj | grep blah

    Reading the help in the debugger showed nothing related; further reading confirmed such a feature didn't exist. So, the only way was to patch the debugger.

    I didn't have much time, so the solution is ugly: the syntax for classes is different from the syntax for objects. For objects, you have to use strings and separate the argument with a comma; for classes, the comma is forbidden and the regex is specified directly, without quotes:

    $ perl -d -e 'sub foo { "bar" }; $o = bless {}, "main"'

    Loading DB routines from perl5db.pl version 1.39_10
    Editor support available.

    Enter h or 'h h' for help, or 'man perldebug' for more help.

    main::(-e:1):   sub foo { "bar" }; $o = bless {}, "main"
      DB<1> m $o
    foo
    via UNIVERSAL: DOES
    via UNIVERSAL: VERSION
    via UNIVERSAL: can
    via UNIVERSAL: isa
      DB<2> m $o, '(?i:o)'
    foo
    via UNIVERSAL: DOES
    via UNIVERSAL: VERSION
      DB<3> m main
    foo
    via UNIVERSAL: DOES
    via UNIVERSAL: VERSION
    via UNIVERSAL: can
    via UNIVERSAL: isa
      DB<4> m main (?i:o)
    foo
    via UNIVERSAL: DOES
    via UNIVERSAL: VERSION

    And here's the patch:

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Forgetfulness and 6-months-from-now-you
1 direct reply — Read more / Contribute
by oakbox
on Jan 12, 2018 at 05:51
    Can anyone explain to me WHY I must constantly forget and relearn that:
    my $sth = $dbh->prepare("SELECT * FROM Something");
    $sth->execute();
    if ($sth->err()) {
        die $sth->errstr();
    }
    is functionally equivalent to
    my $sth = $dbh->prepare("SELECT * FROM Something");
    $sth->execute();
    if ($dbh->err()) {
        die $dbh->errstr();
    }

    Because every couple of years I will become frustrated with typing out the error check on the various statement handles.
    And then look into how I can make that easier.
    And RE-LEARN that I could have been checking the database handle and just copy/pasting that over and over.

    So frustrating, a real smack-to-the-forehead kind of moment.

    And I think it ties in with why I write code the way I do. I WRITE IT OUT. I take the time to format and indent and line up the code because I have learned 6-months-in-the-future-me will really appreciate it and I don't want him cussing at me, retroactively.

    Just needed to vent a little bit, please return to your regularly scheduled activities.

Repeating a substitution
4 direct replies — Read more / Contribute
by choroba
on Jan 09, 2018 at 05:39
    Inspired by Stack Overflow, again.

    A user asked for an (awk or similar) one-liner that would replace a separator with a different one in a file, but only the first N separators should be replaced.

    For small Ns, it's easiest to repeat the substitution:

    perl -pe 's/,/|/;s/,/|/;s/,/|/'

    But, what should one do when they want to replace the first 10 separators?

    My first idea was to use a for loop:

    perl -pe 's/,/|/ for 1 .. 10' # Oops!

    Unfortunately, it doesn't work, as the for creates another local $_ and the substitution happens to the numbers, not the input.

    So, my next idea was to use a counter with /e:

    perl -pe 's/,/$i++<10 ? "|" : ","/ge'

    It works, but is ugly and hard to explain to someone not familiar with Perl.

    Another way is to tie the two $_ variables together by aliasing the outer $_ by the inner one:

    perl -pe 's/,/|/ for $_, $_'
    Again, this works only for a small number of substitutions.

    But, we can generalize a list of the same things: we can use the x operator in list context! It's short, readable, and follows the DRY principle:

    perl -pe 's/,/|/ for ($_) x 10'
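    For anyone who wants to convince themselves outside a one-liner, the counter variant can be run on sample data (the data below is made up for the demonstration):

    ```perl
    use strict;
    use warnings;

    my $line = join ',', 1 .. 15;    # fourteen commas in total
    my $n    = 0;

    # Replace only the first 10 separators; /e evaluates the replacement,
    # so commas past the tenth are put back unchanged.
    $line =~ s/,/$n++ < 10 ? '|' : ','/ge;

    print "$line\n";    # 1|2|3|4|5|6|7|8|9|10|11,12,13,14,15
    ```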

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
To glob or not to glob
7 direct replies — Read more / Contribute
by haukex
on Jan 07, 2018 at 07:54

    I am often torn when it comes to recommending Perl's glob (aka the <...> operator, when it isn't readline). On the one hand, it's built in and often shortens code, on the other, it has several caveats one should be aware of.

    1. glob does not list filenames beginning with a dot by default. For someone coming from a unixish shell, this might make perfect sense, but for someone coming from, for example, a readdir implementation, this might be surprising, and so it should at least be mentioned.

    2. Probably the biggest problem I see with glob is variables interpolated into the pattern. The default glob splits its argument on whitespace, which means that, for example, glob("$dir/*.log") is a problem when $dir is 'c:/program files/foo'. This can be avoided by doing use File::Glob ':bsd_glob'; (Update: except on Perls before v5.16, please see Tux's reply and below for alternatives), but that doesn't help with the next problem:

    3. If a variable interpolated into the pattern contains glob metacharacters (\[]{}*?~), this will cause unexpected results for anyone not aware of this list and expecting the characters to be taken literally.

    4. Lastly, File::Glob can override glob globally. If, for example, you use it in a module, and someone else overrides the default glob, then suddenly your code might not behave the way you expected.

    5. <update> glob in scalar context with a variable pattern also suffers from surprising behavior, as choroba pointed out in his reply - thank you! (additional info) </update>

    That's why I think advising the use of glob without mentioning the caveats is potentially problematic. Perhaps one wants to create a backup of a folder and doesn't want to miss any files, such as, for example, .htaccess. And I also often see things like glob("$dir/*") going without comment.

    Personally I find readdir, in combination with some of the functions from File::Spec, to be a decent, if slightly complicated, tool (one example). One better alternative among several is children from Path::Class::Dir, or methods from one of the other modules like Path::Tiny. (Modules like File::Find::Rule often get mentioned as alternatives, except that those of course recurse into subdirectories by default.)
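    As a concrete illustration of the readdir plus File::Spec route mentioned above, here is a self-contained sketch (it builds its own temporary directory, so the file names are made up for the demonstration):

    ```perl
    use strict;
    use warnings;
    use File::Spec;
    use File::Temp qw(tempdir);

    # Set up a scratch directory with a dot file and two ordinary files.
    my $dir = tempdir(CLEANUP => 1);
    for my $name ('.htaccess', 'a.log', 'b.log') {
        open my $fh, '>', File::Spec->catfile($dir, $name) or die "$name: $!";
        close $fh;
    }

    opendir my $dh, $dir or die "$dir: $!";
    # Unlike glob, readdir returns dot files too; filter only . and ..
    my @files = sort grep { $_ ne '.' && $_ ne '..' } readdir $dh;
    closedir $dh;

    # Turn the bare names back into full paths, portably.
    my @paths = map { File::Spec->catfile($dir, $_) } @files;

    print "$_\n" for @files;    # .htaccess, a.log, b.log
    ```

    Note that .htaccess shows up without any extra options, and a directory name containing spaces or glob metacharacters causes no trouble at all.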

    use Path::Class qw/dir/;
    my @files = dir('foo', 'bar quz', 'baz')->children;
    # @files includes .dot files, but not . and ..
    # and its elements are Path::Class objects
    print "<$_>\n" for @files;

    Now, of course this isn't to say glob is all bad, I've certainly used and recommended it plenty of times. If one has read all of its documentation, including File::Glob, and is aware of all the caveats, and especially if one is using fixed strings for the patterns, it can be perfectly fine. But I still think it should not be blindly used or recommended.

Using constants as hash keys
3 direct replies — Read more / Contribute
by choroba
on Jan 04, 2018 at 14:03
    Context: a StackOverflow question on how to use constants as hash keys.

    Note that the module itself mentions the following:

    For example, you can't say $hash{CONSTANT} because CONSTANT will be interpreted as a string. Use $hash{CONSTANT()} or $hash{+CONSTANT} to prevent the bareword quoting mechanism from kicking in. Similarly, since the => operator quotes a bareword immediately to its left, you have to say CONSTANT() => 'value' (or simply use a comma in place of the big arrow) instead of CONSTANT => 'value'.

    The OP used &CONSTANT => 'value' which works but doesn't inline the constant (i.e. expand it during compile time).

    ikegami pointed me to a different way which I found more pleasing than CONSTANT() which, as he rightly noted, leaks the internal implementation of constants.

    use constant A => 12;
    my %hash = ( (A) => 'twelve' );    # beautiful
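    A small self-contained demonstration of the bareword pitfall next to the two working spellings (the constant name here is made up):

    ```perl
    use strict;
    use warnings;
    use constant COLOR => 'red';

    my %oops = ( COLOR => 1 );      # => quotes the bareword: key is "COLOR"
    my %ok   = ( COLOR() => 1 );    # explicit call: key is "red"
    my %nice = ( (COLOR) => 1 );    # parentheses defeat the autoquoting: key is "red"

    print exists $oops{COLOR} ? "quoted\n" : "called\n";    # quoted
    ```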

    I wanted to verify it behaves exactly the same, so I tried running it through B::Deparse, B::Terse, and B::Concise.

    m=Deparse
    diff <(perl -MO=$m -e 'use constant A => 12; my %hash = ( A() => "twelve")') \
         <(perl -MO=$m -e 'use constant A => 12; my %hash = ( (A) => "twelve")')
    says the code is identical.

    Terse needs some tweaking to skip the pointers:

    m=Terse
    diff <(perl -MO=$m -e 'use constant A => 12; my %hash = ( A() => "twelve")' \
           | perl -pe 's/0x\w+/X/g') \
         <(perl -MO=$m -e 'use constant A => 12; my %hash = ( (A) => "twelve")' \
           | perl -pe 's/0x\w+/X/g')

    But Concise shows a slight difference:

    m=Concise
    diff <(perl -MO=$m -e 'use constant A => 12; my %hash = ( A() => "twelve")') \
         <(perl -MO=$m -e 'use constant A => 12; my %hash = ( (A) => "twelve")')
    7c7
    < 4  <$> const[IV 12] s*/FOLD ->5
    ---
    > 4  <$> const[IV 12] sP*/FOLD ->5

    From the documentation it seems the P just means that A was parenthesized. I can imagine this information could be valuable to Perl (e.g. in the LHS of an assignment, but the structures of scalar versus list assignments are much more different); but probably not in this case.

    Update: Topics for meditation include other possible syntaxes (e.g. A ,=> 12), personal preferences, explanation of the Concise's output, etc.

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
Now released: Assert::Refute - A unified testing and assertion tool
1 direct reply — Read more / Contribute
by Dallaylaen
on Jan 02, 2018 at 04:55

    Hello dear esteemed monks,

    More than once I felt an urge to put a piece of a unit test script into production code to see what's actually happening there.

    Now, there is the excellent Test::More, which provides a very terse, recognizable, and convenient language for building unit tests. Unfortunately, it is not quite useful for production code. There are also multiple runtime assertion solutions, but they mostly focus on optimizing themselves out.

    My new module Assert::Refute is here to try and bridge the gap. The usage is as follows:

    use My::Module;
    use Assert::Refute qw(:all), {on_fail => 'carp'};
    use Assert::Refute::T::Numeric;

    my $foo = My::Module->bloated_untestable_method;

    refute_these {
        like $foo->{bar}, qr/f?o?r?m?a?t/;
        can_ok $foo->{baz}, qw(do_this do_that frobnicate);
        is_between $foo->{price}, 10, 1000, "Price is reasonable";
    };

    And this can be copied-and-pasted verbatim into a unit testing script. So why bother with runtime assertions? Possible reasons include:

    • Testing the method requires many preconditions/dependencies;
    • The method includes side effects and outside world interactions that are not easily replaced with mocks;
    • The method only misbehaves sporadically, doing what it should do most of the time, and the exact conditions required are not known;
    • The method needs to be refactored to be properly tested, and needs test coverage to be refactored.

    Main features include:

    • refute and subcontract calls allow building arbitrarily complex checks from simple ones;
    • refute_these {...} block function to perform runtime assertions;
    • Prototyped functions mirroring those in Test::More to allow for moving checks between runtime and test scripts for optimal speed/accuracy tradeoff;
    • Object-oriented interface to allow for keeping the namespace clean;
    • Simple building and testing of custom checks that will run happily under Test::More as well as Assert::Refute;
    • Reasonably fast, at around 250-300K refutations/second on a 2.7GHz processor.

    This project continues some of my previous posts. Despite its humble 0.07 version number and documentation noting that it's alpha, I think it is ready to be shown to the public now.

Programming Concepts
5 direct replies — Read more / Contribute
by aartist
on Dec 29, 2017 at 10:59
    A long time ago, I took a course called 'Programming Languages'. It discussed the various features of programming languages and how each is implemented in a particular language. That could have been the most interesting subject, considering that I have spent a few years programming. Unfortunately, much of that learning was lost in learning the syntax of the languages. Now that I have programmed in various languages, I'd like to go back to the basics to refresh my understanding.

    I'd like to know "what are the various programming concepts on which various languages have been designed", "where do they differ", etc., so that I can understand or pick up any language based on its concepts, and easily pick up the implementation of a concept later in my learning. I'd like to see this in terms of Perl, Python, Java, JavaScript, and "your favorite language".

    Also, now that there is a variety of frameworks for each language, I'd like to gain similar knowledge about the concepts on which these frameworks are designed. Each framework could have been designed based on different concepts.

    Any references would be useful.


Verifying your distribution's revdeps still work after a change
1 direct reply — Read more / Contribute
by stevieb
on Dec 28, 2017 at 15:48

    In one of my pieces of software, Mock::Sub, I recently found that I wanted to add a new feature, and as always, I wrote a test file for the new work before modifying code. After making the first round of changes, I stumbled upon a previously unknown bug, so I opened an issue, and decided to tackle that before adding the new feature, to ensure all previous tests would run.

    The new feature is quite minor and actually requires a specific parameter to be sent in to change existing behaviour (i.e. no existing code that uses the distribution should have been affected), but the bug was a little more complex, and did change things internally.

    After I got the bug fixed and the new feature added, and before just blindly uploading it to the CPAN, I of course wanted to know whether the reverse dependencies (down-river distributions) would be adversely affected, which would cascade Testers failure emails to the poor souls whose distributions I broke (most are mine in this case, but I digress).

    A long time ago, I went about writing a completely autonomous testing platform to do extensive testing on my repos against all installed Perlbrew/Berrybrew instances (it can dispatch out to remote systems as well). This distribution is Test::BrewBuild. One of the core features I built into this software is the ability to automatically perform unit tests on all reverse dependencies, as they currently sit on the CPAN, against the changed code in the local distribution.

    I'll get right to it; it's pretty straightforward:

    First, ensure you're in your repository directory, and ensure all tests pass on the distribution you've just updated:

    ~/devel/repos/mock-sub$ make test
    t/00-load.t .................... ok
    t/01-called.t .................. ok
    t/02-called_count.t ............ ok
    t/03-instantiate.t ............. ok
    t/04-return_value.t ............ ok
    t/05-side_effect.t ............. ok
    t/06-reset.t ................... ok
    t/07-name.t .................... ok
    t/08-called_with.t ............. ok
    t/09-void_context.t ............ ok
    t/10-unmock.t .................. ok
    t/11-state.t ................... ok
    t/12-mocked_subs.t ............. ok
    t/13-mocked_objects.t .......... ok
    t/14-core_subs.t ............... ok
    t/15-remock.t .................. ok
    t/16-non_exist_warn.t .......... ok
    t/17-no_warnings.t ............. ok
    t/18-bug_25-retval_override.t .. ok
    t/19-return_params.t ........... ok
    t/manifest.t ................... skipped: Author tests not required for installation
    t/pod-coverage.t ............... skipped: Author tests not required for installation
    t/pod.t ........................ skipped: Author tests not required for installation
    All tests successful.
    Files=23, Tests=243, 1 wallclock secs ( 0.05 usr 0.02 sys + 0.72 cusr 0.07 csys = 0.86 CPU)
    Result: PASS

    So far, so good (of course, I had already ensured the "skipped" tests pass as well). Now, after installing Test::BrewBuild, and ensuring you've got at least one instance of Perlbrew/Berrybrew installed, simply run the brewbuild binary with the -R or --revdep flag. In the example below, for brevity, I've limited the testing to only the version of Perl I'm currently using. If I had tested against more versions (if you leave off the -o or --on flag, it tests against all installed versions by default), each version would be listed under each revdep with its PASS or FAIL status:

    ~/devel/repos/mock-sub$ brewbuild -R -o 5.24.1
    reverse dependencies: App::RPi::EnvUI, RPi::DigiPot::MCP4XXXX,
    Devel::Examine::Subs, PSGI::Hector, File::Edit::Portable, Devel::Trace::Subs

    App::RPi::EnvUI
    5.24.1 :: PASS

    RPi::DigiPot::MCP4XXXX
    5.24.1 :: PASS

    Devel::Examine::Subs
    5.24.1 :: PASS

    PSGI::Hector
    5.24.1 :: PASS

    File::Edit::Portable
    5.24.1 :: PASS

    Devel::Trace::Subs
    5.24.1 :: PASS

    That's all there is to it. Now I am confident that my changes will absolutely not break any of the down-river distributions that require this one.

    Note: If there had been failures, a bblog directory would be created, and the full test output for each failing distribution placed into its own file, for easy review of what went wrong. The file contains everything related to the test run that you'd normally see output by the cpanm command. Example:

    ~/devel/repos/mock-sub$ ll bblog
    drwx------ 2 steve steve 4096 Dec 28 10:53 .
    drwxrwxr-x 8 steve steve 4096 Dec 28 10:53 ..
    -rw-rw-r-- 1 steve steve 8128 Dec 28 10:26 App-RPi-EnvUI-5.24.1-FAIL.bblog

    Note 2: If you want to get an understanding of most of the stuff brewbuild is doing, simply throw in a -d 7 to enable full debug logging to stdout.

    Update: Here's an example with multiple versions of Perl installed:

    App::RPi::EnvUI
    5.24.1 :: PASS
    5.18.4 :: FAIL
    5.24.0 :: FAIL

    RPi::DigiPot::MCP4XXXX
    5.18.4 :: PASS
    5.24.0 :: PASS
    5.24.1 :: PASS

    Devel::Examine::Subs
    5.18.4 :: PASS
    5.24.0 :: PASS
    5.24.1 :: PASS

    PSGI::Hector
    5.18.4 :: PASS
    5.24.0 :: PASS
    5.24.1 :: PASS

    File::Edit::Portable
    5.18.4 :: PASS
    5.24.0 :: PASS
    5.24.1 :: PASS

    Devel::Trace::Subs
    5.18.4 :: PASS
    5.24.0 :: PASS
    5.24.1 :: PASS
    ... and the resulting bblog directory entries:

    -rw-rw-r-- 1 steve steve 1085954 Dec 28 13:20 App-RPi-EnvUI-5.18.4-FAIL.bblog
    -rw-rw-r-- 1 steve steve 1078097 Dec 28 13:20 App-RPi-EnvUI-5.24.0-FAIL.bblog

    The output in the FAIL logs shows that one of the distribution's dependencies is a version behind on those two Perls, so all I have to do is bump it in the Makefile.PL and re-run the tests. It has nothing to do with the Mock::Sub distribution at all, but it did point out an entirely different problem with that dist. So I suppose this tool is also handy for warning about issues outside of the dist you're currently updating.

Read file text and find fibonacci series
3 direct replies — Read more / Contribute
by darkblackblue
on Dec 28, 2017 at 09:37

    Hi. My goal is to read a single line from a file and find Fibonacci series in it that have at least 3 elements. I want to check numbers of 1 to 6 digits. The Fibonacci series starts 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... For example, a file text document:


    I divide the string into pieces of 1 to 6 digits. Check: is 4 a Fibonacci number? No, go ahead. Is 49 a Fibonacci number? No, next. 496? No. Then 4969, then 49693, then 496934: no. There isn't any Fibonacci number, so go to the next digit and do it again: 9, 96, 969, 9693, 96934.

    use 5.010;
    use strict;
    use warnings;

    open(FILE, "<:encoding(UTF-8)", "aa.txt") or die "Could not open file: $!";
    my $numbers;
    while (<FILE>) {
        $numbers = "$_";
        print "$_";
    }
    chomp $numbers;
    print "\n$numbers";
    my $len = length($numbers);
    print "\n$len\n";
    my $abc;
    foreach my $i (0 .. $len) {
        foreach my $j (1 .. 6) {
            print "$j----->";
            $abc = substr($numbers, $i, $j);
            print "$abc\n";
        }
        print "***********************************\n";
    }
    close FILE;

    I read that you can test whether a number is in the Fibonacci series with a PERFECT SQUARE check, but I didn't use it. C code example:

    // C++ program to check if x is a perfect square
    #include <iostream>
    #include <math.h>
    using namespace std;

    // A utility function that returns true if x is a perfect square
    bool isPerfectSquare(int x)
    {
        int s = sqrt(x);
        return (s*s == x);
    }

    // Returns true if n is a Fibonacci number, else false
    bool isFibonacci(int n)
    {
        // n is Fibonacci if one of 5*n*n + 4 or 5*n*n - 4 (or both)
        // is a perfect square
        return isPerfectSquare(5*n*n + 4) || isPerfectSquare(5*n*n - 4);
    }

    // A utility function to test the above functions
    int main()
    {
        for (int i = 1; i <= 10; i++)
            isFibonacci(i) ? cout << i << " is a Fibonacci Number \n"
                           : cout << i << " is not a Fibonacci Number \n";
        return 0;
    }
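    For what it's worth, the same perfect-square test translates to Perl quite directly. This is a sketch (not part of the original question) and is fine for the 1-to-6-digit range discussed above:

    ```perl
    use strict;
    use warnings;

    # True if $x is a perfect square.
    sub is_perfect_square {
        my ($x) = @_;
        my $s = int( sqrt($x) );
        return $s * $s == $x;
    }

    # A positive integer $n is a Fibonacci number if and only if
    # 5*$n**2 + 4 or 5*$n**2 - 4 is a perfect square.
    sub is_fibonacci {
        my ($n) = @_;
        return is_perfect_square( 5 * $n * $n + 4 )
            || is_perfect_square( 5 * $n * $n - 4 );
    }

    print join( ' ', grep { is_fibonacci($_) } 1 .. 100 ), "\n";
    # prints: 1 2 3 5 8 13 21 34 55 89
    ```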

    2017-12-29 Athanasius restored original content

Efforts to modernize CPAN interface?
8 direct replies — Read more / Contribute
by nysus
on Dec 19, 2017 at 15:46

    CPAN works flawlessly, and as every PerlMonk knows, it is one of the Seven Programming Wonders of the World. But I'm curious to know if there have been any efforts to overhaul and modernize the CPAN interface, particularly PAUSE and the bug reporting site. Is anything in the works? Has this been a topic of discussion amongst Larry Wall or any of the other lesser Perl Gods who oversee it?

    Personally, I would love to see these great resources brought up to date with a more modern interface. I know, I get it, these sites probably work without javascript and on every browser since Mosaic. But there must be a way to keep both the purists and developers who are into the superficialities happy.

    I'm curious to know what the seasoned Monks think. Does CPAN need a facelift?

    Downvotes and nasty ad homs welcomed and encouraged for entertainment purposes. More thoughtful comments are welcome as well.

    $PM = "Perl Monk's";
    $MCF = "Most Clueless Friar Abbot Bishop Pontiff Deacon Curate Priest";
    $nysus = $PM . ' ' . $MCF;
    Click here if you love Perl Monks
