If you've discovered something amazing about Perl that you just need to share with everyone, this is the right place.

This section is also used for non-question discussions about Perl, and for any discussions that are not specifically programming related. For example, if you want to share or discuss opinions on hacker culture, the job market, or Perl 6 development, this is the place. (Note, however, that discussions about the PerlMonks web site belong in PerlMonks Discussion.)

Meditations is sometimes used as a sounding-board — a place to post initial drafts of perl tutorials, code modules, book reviews, articles, quizzes, etc. — so that the author can benefit from the collective insight of the monks before publishing the finished item to its proper place (be it Tutorials, Cool Uses for Perl, Reviews, or whatever). If you do this, it is generally considered appropriate to prefix your node title with "RFC:" (for "request for comments").

User Meditations
The problem of documenting complex modules.
3 direct replies — Read more / Contribute
by BrowserUk
on Jul 05, 2015 at 04:41

    This is a meditation; but I also hope that it might start a discussion that will come up with (an) answer(s) to what I see as an ongoing and prevalent problem.

    This has been triggered at this time by my experience of trying to wrap my brain around a particularly complex module; but I don't want to get into discussion specific to that module, so I won't be naming it.

    Suffice to say that CPAN is replete with modules that are technically brilliant and very powerful solutions to the problems they address; and that deserve far wider usage than they get.

    In many cases the problem is not that they lack documentation -- often quite the opposite -- but more that they don't have a simple in: a clearly defined, obvious starting point on which the new user can build.

    An example of (IMO) good documentation is Parallel::ForkManager. Its synopsis (I've tweaked it slightly to remove a piece of unnecessary fluff):

    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new($MAX_PROCESSES);

    foreach my $data (@all_data) {
        # Forks and returns the pid for the child:
        my $pid = $pm->start and next;

        ... do some work with $data in the child process ...

        $pm->finish; # Terminates the child process
    }

    is sufficient to allow almost anyone needing to use it, for almost any purpose, to put together a reasonable working prototype in a dozen lines of code without reading further into the documentation.
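    To make that concrete, here is a minimal sketch of the kind of prototype I mean. The work list and the per-item sub are hypothetical; only the Parallel::ForkManager calls come from the synopsis above:

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my @files = glob('*.log');                  # hypothetical work list
    my $pm    = Parallel::ForkManager->new(4);  # at most 4 children at a time

    for my $file (@files) {
        my $pid = $pm->start and next;          # parent: move on to the next item
        process_one_file($file);                # child: do the real work
        $pm->finish;                            # child exits here
    }
    $pm->wait_all_children;

    sub process_one_file { my ($file) = @_; print "processing $file in $$\n"; }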

    It allows the programmer to get started and move forward almost immediately on solving his problem -- which isn't "How to use P::FM" -- and only refer back to and utilise the more sophisticated elements of P::FM, as and when he encounters the limitations of that simple starting point.

    As such, the module is successful in hiding the nitty-gritty details of using fork correctly, whilst imposing the minimum of either up-front learning curve or infrastructural boiler-plate upon the programmer, who has other more important (to him) things on his mind.

    Contrast that with something like POE, which requires a month of reading through the synopses of the 800+ modules in the POE::* namespace, and then another month of planning, before the new user could put together his first line of code. As powerful as that module -- suite of modules, dynasty of modules -- is, unless you have the author's help, and lots of time, getting started is an extremely daunting process. In that respect (alone perhaps), POE fails to enable a 'simple in'.

    And before anyone says that it is unfair to compare those two modules -- which may be true -- the purpose was to pick extremes to make a point; not to promote or denigrate either.

    Another module that I know I should have made much more use of, in the type of code I frequently find myself writing, is PDL. I've tried at least a dozen times to use PDL as a part of one of my programs; and (almost) every time I've abandoned the attempt before ever writing a single line of PDL, because I get frustrated by the total lack of a clear entry point into the surfeit of documentation.

    There's the FAQ, and the Core; and the Index; and the QuickStart; and the Doc; and the Basic; and the Lite; and the Course; and the Philosophy; and the pdldoc; and the Tips; and ... I'm outta here. I'm trying to write my program, which does a little math on some biggish datasets that would benefit from being vectored, but life's too short...

    Again; the underlying code is brilliant (I am assured), and it isn't a case of a lack of documentation; just a mindset that says: "this is PDL in all its glory, power and nuance. Bathe yourself in its wonderfulness and wallow in its depth". Oh, and then when you've immersed yourself in its glory, understood its philosophy, and acclimated to its nuance, then you can get back to working out how to use it to solve your problem.

    And that's a real shame; and a waste.

    I'm not sure what the solution is. I do know that for the modules I use most -- List::Util, Data::Dump, threads, etc. -- I have rarely ever had to look at the documentation; their functionality has (for me) become an almost invisible extension of Perl itself, and only the occasional (perhaps you forgot to load "sum"?) reminds me that they aren't.
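    For what it's worth, that "invisible extension" feel looks something like this in practice -- a trivial sketch with made-up data; forget the qw(sum) import and perl's "Undefined subroutine" error is the only reminder that sum isn't built in:

    use strict;
    use warnings;
    use List::Util qw(sum max);
    use Data::Dump qw(dump);

    my @sizes = (12, 7, 42, 3);
    print "total: ", sum(@sizes), "  largest: ", max(@sizes), "\n";
    print dump(\@sizes), "\n";   # quick, human-readable view of the structure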

    Of course, what they do is essentially pretty simple; but that in itself is perhaps a clue.

    I do know that (for me) the single most important thing in encouraging my use of a module is being able to C&P the synopsis into my existing program, tweak the variable names, and have it do something useful immediately.

    In part, that comes down to a well designed API; in part, to well-chosen defaults; and in part to having a well-chosen, well-written synopsis that addresses the common case, with variable names and structure that make it obvious how to adapt that synopsis to the common case. Once I have something that compiles and runs -- even if it doesn't do exactly what I need it to do; or even what I thought it would do from first reading -- it gives me a starting point and something to build on. And that encourages me to persist. To read the documentation on an as-I-need-to basis to solve particular problems as I encounter them.

    Over a decade ago, I posted My number 1 tip for developers.; and this is the other side of that same philosophy. Start simple and build.

    And that I think has to be the correct approach to documenting complex modules. They need to:

    1. Offer a single, obvious, starting point. The in.
    2. That needs to be very light on history, philosophy, jargon, technical and social commentary and background. And choice.
    3. It needs to offer a single, simple, well-chosen, starting point, that requires minimal reading to adapt to the user's code, for the common case.
    4. It then needs to offer them a quick, clear, simple path to solving their problem.

    What it must not do:

    • It mustn't present them with 'a bloody great big list of entrypoints/methods'.
    • It mustn't offer them a myriad of choices and configuration options.
    • It mustn't take them on a deep immersion in the details of either algorithms or implementation.
    • It mustn't present them with either an "Ain't this amazing" or an "Ain't I clever" advert.
    • It mustn't waste their time with details of your personal preferences, prejudices, philosophies and theologies.

    If you want programmers to use your modules, you need to tell them what (the minimum) they *NEED TO KNOW* to get started. And then give a clear index to the variations, configurations and extensions to that basic starting point.

    Achieve that, give them their 'in', with the minimum of words, fuss or choice, and they'll come back for all the rest as they need it.


    This is ill-thought through and incomplete, so what (beyond risking offending half the authors on CPAN) am I trying to achieve with this meditation?

    I'd like to hear whether you agree with me, or how you differ. What you look for in module documentation. Examples that you find particularly good, or bad.

    It'd be nice to be able to derive from the thread a set of consensus guidelines for documenting moderate-to-complex modules -- that almost certainly won't happen -- but if we managed to get a good cross-section of opinions on what makes for good and bad documentation, and a variety of opinions on the right way to go about it, it might provide a starting point for people needing to do this in the future.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
    I'm with torvalds on this Agile (and TDD) debunked I told'em LLVM was the way to go. But did they listen!
Writing multiple Excel::Writer::XLSX worksheets in parallel (3rd and final attempts)
3 direct replies — Read more / Contribute
by marioroy
on Jul 04, 2015 at 23:16

    My 1st and 2nd attempts got me warmed up, and I thought faster was possible. The following demo is my 3rd attempt and writes 1 million cells combined in less than 6 seconds from start to finish, and 57 seconds for 10 million cells. Running serially takes 15 and 141 seconds for 1 and 10 million cells respectively. Processors can turbo boost for some time, so serial code is likely to run at a faster clock speed.

    for ( 1 .. 111_111 )   { ... }   # 3 * 3, 1 million
    for ( 1 .. 1_111_111 ) { ... }   # 3 * 3, 10 million

    Writing text data will slow this down a little due to obtaining the next unique id from the shared strTable object. The internal str_table is shared between worksheets in Excel::Writer::XLSX. Thus, synchronization is necessary as well. The flipflop mode in MCE::Shared provides both sharing and automatic synchronization all in one.

    Note: This requires MCE from trunk r957 or later which includes MCE::Shared as MCE 1.700 is not yet released. The logic consumes only the memory necessary. There is never duplicate data from running multiple workers.

    #!/usr/bin/env perl

    use strict;
    use warnings;

    # --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---

    package StrTable;

    sub new {
        my ($class, $self) = ( shift, { table => {}, unique => 0 } );
        bless $self, $class;
    }

    sub table  { $_[0]->{table } }
    sub unique { $_[0]->{unique} }

    sub value {
        if (exists $_[0]->{table}->{ $_[1] }) {
            $_[0]->{table}->{ $_[1] };
        } else {
            $_[0]->{table}->{ $_[1] } = $_[0]->{unique}++;
        }
    }

    # --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---

    package main;

    use Archive::Zip ();
    use File::Copy qw(move);
    use File::Find ();
    use File::Temp ();

    $File::Temp::KEEP_ALL = 1;

    use Excel::Writer::XLSX;

    use MCE::Signal qw($tmp_dir);
    use MCE::Loop 1.699;
    use MCE::Shared;

    my $nodeList = [
        [ 'AMS' , 'a'  ],
        [ 'APJ' , 'ap' ],
        [ 'EMEA', 'e'  ]
    ];

    my $strTable = mce_share( { flipflop => 1 }, new StrTable );
    my ($center, $format);

    {
        # Override _get_shared_string_index to synchronize str_table updates
        no warnings 'redefine';

        sub Excel::Writer::XLSX::Worksheet::_get_shared_string_index {
            my ($self, $str) = (shift, shift);
            if ( not exists ${ $self->{_str_cache} }->{$str} ) {
                ${ $self->{_str_cache} }->{$str} = $strTable->value($str);
            } else {
                ${ $self->{_str_cache} }->{$str};
            }
        }
    }

    sub init_wb {
        my ($wn, $file) = (shift, shift);

        # Increment $wn by 1 since worksheet xml files begin at 1
        $wn++; mkdir "$tmp_dir/$wn";

        my $wb = Excel::Writer::XLSX->new($file || "$tmp_dir/$wn/tmp.xlsx");
        $wb->set_tempdir("$tmp_dir/$wn");

        # Set workbook properties
        $wb->set_properties(
            title    => 'Node List',
            author   => 'L_WC demo',
            comments => 'Node List',
        );

        # Define/add formats to the workbook
        $center = $wb->add_format(align => 'center');
        $format = $wb->add_format(align => 'center', bg_color => 44);

        # Add worksheets, specify formats for columns/rows
        for (0 .. @{ $nodeList } - 1) {
            $wb->add_worksheet($nodeList->[$_][0]);
            $wb->sheets($_)->set_column(0, 4, 15, $center);
        }

        return $wb;
    }

    sub close_wb {
        my $wb = shift;

        MCE->sync();                                 # Wait for others to complete, important

        $wb->{_str_table } = $strTable->table();     # Replace str_table
        $wb->{_str_total } = 0+$strTable->unique();  # Update str_total
        $wb->{_str_unique} = 0+$strTable->unique();  # Update str_unique

        $wb->close();                                # Close workbook
    }

    sub merge_wb_data {
        my $wb_file = shift;
        my ($zip, @pths, @xlsx_files) = (Archive::Zip->new());

        local ($@, $!, $^E, $?);

        # Other files, e.g. table data likely need the same and not done
        # for this demonstration. Just worksheet files are merged.
        # I received help by reading _store_workbook inside
        # Excel::Writer::XLSX::Workbook.pm.

        # Find worksheet files 2,3,...
        for my $_num (1 .. @{ $nodeList }) {
            my $wanted = sub {
                push @pths, $1 if $File::Find::name =~ /(.*)\/sheet$_num\.xml/;
            };
            File::Find::find({
                wanted => $wanted, untaint => 1, untaint_pattern => qr|^(.+)$|
            }, "$tmp_dir/$_num");
        }

        # Move worksheet files 2,3,... to where worksheet 1 data resides
        for (0 .. @pths - 1) {
            unlink $pths[$_]."/../../../tmp.xlsx";
            if ($_ > 0) {
                my $_num = $_ + 1;
                unlink $pths[0]."/sheet$_num.xml";
                move $pths[$_]."/sheet$_num.xml", $pths[0]."/sheet$_num.xml";
            }
        }

        # Re-zip xlsx files
        my $wanted = sub { push @xlsx_files, $File::Find::name if -f };
        my $temp_dir = $pths[0]."/../../";
        my $short_name;

        File::Find::find({
            wanted => $wanted, untaint => 1, untaint_pattern => qr|^(.+)$|
        }, $temp_dir);

        for my $file_name (@xlsx_files) {
            $short_name = $file_name;
            $short_name =~ s{^\Q$temp_dir\E/?}{};
            $zip->addFile($file_name, $short_name);
        }

        open my $fh, '>', $wb_file
            or die "Error opening xlsx file: $!\n";

        binmode $fh;
        $zip->writeToFileHandle($fh);
        close $fh;
    }

    # --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---

    MCE::Loop::init(
        max_workers => scalar(@{ $nodeList }),
        chunk_size  => 1,
        posix_exit  => 1,
        use_threads => 0,
    );

    mce_loop {
        my ($region, $sql) = ($_->[0], $_->[1]);
        my ($wb, $ws);

        # Acquire data from the DB. Each worker must obtain a handle.
        # The DB logic is similar to running serially. Just the where
        # clause is likely unique for each region.

        # Fill worksheet rows/cells
        if ($region eq 'AMS') {
            $wb = init_wb(0); $ws = $wb->sheets(0);
            $ws->write(0, 2, 'foo', $format);
            for ( 1 .. 111_111 ) {
                $ws->write(0 + $_, 0, 1000 + $_);
                $ws->write(1 + $_, 2, 2000 + $_);
                $ws->write(2 + $_, 4, 3000 + $_);
            }
            print "AMS ---- DONE.\n";
        }
        elsif ($region eq 'APJ') {
            $wb = init_wb(1); $ws = $wb->sheets(1);
            $ws->write(0, 2, 'bar', $format);
            for ( 1 .. 111_111 ) {
                $ws->write(0 + $_, 0, 4000 + $_);
                $ws->write(1 + $_, 2, 5000 + $_);
                $ws->write(2 + $_, 4, 6000 + $_);
            }
            print "APJ ---- DONE.\n";
        }
        elsif ($region eq 'EMEA') {
            $wb = init_wb(2); $ws = $wb->sheets(2);
            $ws->write(0, 2, 'baz', $format);
            for ( 1 .. 111_111 ) {
                $ws->write(0 + $_, 0, 7000 + $_);
                $ws->write(1 + $_, 2, 8000 + $_);
                $ws->write(2 + $_, 4, 9000 + $_);
            }
            print "EMEA ---- DONE.\n";
        }

        close_wb($wb) if $wb;

    } $nodeList;

    # Shutdown MCE
    MCE::Loop::finish();

    # Merge data into one workbook
    merge_wb_data('Node_List.xlsx');

    print "Node List is Done.\n";

    Kind regards, Mario

RFC: Net::SNTP::Client v1
5 direct replies — Read more / Contribute
by thanos1983
on Jun 30, 2015 at 12:45

    Hello Everyone,

    About a year ago I started with the idea of creating a Perl module based on Net::NTP. The module that I am thinking of creating would be named Net::SNTP::Client. The difference between the two is precision: from my point of view, the Net::NTP module does not get correct millisecond/nanosecond precision. The module is based on RFC4330, according to which different precision will be achieved on LinuxOS and WindowsOS.
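    For readers unfamiliar with where the extra precision comes from: an (S)NTP timestamp is 32 bits of whole seconds plus 32 bits of binary fraction, so the sub-second part is simply fraction / 2**32. The snippet below is illustrative arithmetic only, not code from the module; the sample values are made up:

    use strict;
    use warnings;

    # Convert a raw 64-bit NTP timestamp (two 32-bit unsigned integers,
    # as unpacked from the packet with "N N") into fractional seconds.
    sub ntp_to_seconds {
        my ($secs, $frac) = @_;
        return $secs + $frac / 2**32;    # fraction of a second, ~233 ps resolution
    }

    my $ts = ntp_to_seconds( 3_912_345_678, 0x80000000 );
    printf "%.9f seconds since 1900-01-01\n", $ts;   # ...0.500000000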

    In theory the module should be compatible with all OSes (WindowsOS, LinuxOS and MacOS); please verify that with me, since I only have LinuxOS.

    I am also planning to create another module, Net::SNTP::Server, which is approximately an SNTP server; I say approximately because I cannot figure out how to replicate the server side. But anyway, first things first.

    Is it possible to take a look and assist me with possible improvements and comments? Since this is my first module I have no experience, so maybe the module is not well written.

    The execution of the script is very simple: create a script, e.g. client.pl, and put the code below in it.

    client.pl

    I have inserted four options:

    -hostname    => NTP hostname or NTP IP
    -port        => 123 (default) or user's choice, e.g. 5000
    -RFC4330     => 1
    -clearScreen => 1

    The -RFC4330 option produces an RFC4330-style printout, and the -clearScreen option clears the screen before the printout. I think both options will be useful for the printout of the script.

    I have chosen to place the module at the folder path "/home/username/Desktop/SNTP_Module/Net/SNTP/Client.pl". Remember, for testing purposes, to change the path in client.pl according to the location where you place the module.
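    In rough outline, a minimal client.pl looks something like this. This is only a simplified sketch, not the full client.pl: the getSNTPTime call and the options come from this post, while the lib path, hostname and return handling are hypothetical placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use lib '/home/username/Desktop/SNTP_Module';   # adjust to where the module lives
    use Net::SNTP::Client;

    # Hypothetical call based on the option list above.
    my $result = Net::SNTP::Client::getSNTPTime(
        -hostname    => '0.pool.ntp.org',
        -port        => 123,
        -RFC4330     => 1,
        -clearScreen => 1,
    );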

    Update 1: Removing (EXPORT_OK, EXPORT_TAGS, shebang line) based on toolic comments.

    Update 2: Removing $frac2bin unused sub.

    Update 3: Adding some checks on the input of getSNTPTime sub

    Update 4: Adding Plain Old Documentation format and updating code based on Monk::Thomas comments.

    Update 5: Updating code based on Monk::Thomas new comments.

    Update 6: Updating code, with new updated Plain Old Documentation.

    Net::SNTP::Client.pm

    Thank you for your time and effort reading and replying to my question/review.

    Seeking for Perl wisdom...on the process of learning...not there...yet!
Software Projects In Real Life: "I See Dead People"
9 direct replies — Read more / Contribute
by sundialsvc4
on Jun 23, 2015 at 08:44

    The title of this Meditation comes, of course, from the punch-line of a really bad movie with a classic O’Henry Ending:

    But in many ways, it also sums up my career.   (Polite pause as the twitter of laughter dies down.)   For most of the past 25 years or so, I’ve been involved in projects.   Generally, not ones that I had started.   Generally, not healthy ones.   Dead ones, or very nearly so.   My task was to try to “turn them around,” and I generally did.   Whether or not my attempts at resuscitation were actually long-term successful, this experience did teach me a lot of the reasons ... and they are human reasons ... why software projects so often go so badly wrong.   I’m not going to do any preaching here, although it may seem so.   I’m just relating some of my personal experiences in a mortuary project triage.   (FYI:   Teams-in-place were anywhere from one to fifteen people, most of whom had “split the coop.”)

    First of all, these projects typically started out with “a great deal of enthusiasm, but no real plan.”   The usual justification was that the project needed to “hurry up to market,” or that the stakeholders in the project “would know it when they saw it” and the managers of the project (if there were any ...) simply gave-up trying to ask them to make up their minds.

    And, of course, in several cases, those stakeholders were assured that they didn’t have to make up their minds.   “Self-directed teams,” the programmers purred self-confidently, “would produce a ‘potentially viable product(!)’ every two weeks!

    “SOP = SOTP.™”   Standard Operating Procedure = Seat Of The Pants.

    And yet, what happened ... what inevitably happened ... is that everything in the “software mechanism” turned out to be inextricably coupled to everything else.   As layers of code were piled on, and as changes pinged-and-ponged throughout all those layers, the whole thing fell down in a heap as the programmers sailed on to the next green pasture.

    Many software projects are actually the work of one Guy.   (Sorry, ladies ...)   That “one guy” might be surrounded by several other people, but this is simply an attempt to scale-up the only modus operandi that this One Guy actually knows:   himself.   The project “feels its way along” because that’s how he’s used to doing it.   (And, because he is a crackerjack programmer, is used to eventually succeeding at producing something.)   There simply isn’t any experience in being part of a successfully managed project:   most programmers, I candidly suspect, haven’t actually seen one.   (And there were no Angelic Choirs that started singing when I showed up either, I’m afraid ... no self-sunshine here.)

    The underlying reason for these problems, I think, is:   a very natural human reaction to what is a virtually-unmanageable technical situation.   The objective of the project is to build a self-directing machine ... and to do it perfectly, because nothing less than perfection will do.   Viewed as a mechanism, software would be said to have “unlimited degrees-of-freedom.”   i.e. “Anything is connected to everything else.”   Although the instant-to-instant flow of control within the software is of course described by if/then/else and looping constructs, the actual mechanism is also determined by its internal and external state.   This concern for “state” is what causes the coupling.   (And it’s also one of the reasons why “Functional Programming” is such a hot research topic.)

    My biggest criticism of Scrum, and Agile, and XP, and, well, most “methodologies,” is that they ignore this aspect.   They focus, instead, upon the organization and the daily work-activities of the team.   They discuss things like “user stories,” which are simply one possible way of trying to express one’s ideas and plans to a customer, but then omit from consideration exactly how that “story” is to become if/then/else, and how that new web of decision-logic is to be tested, and how it both affects and is affected by (“is infinitely coupled to ...”) everything else.   As a paradigm, useful in one sense though it may be, it does not and probably cannot (IMHO) go far enough.

    “We are building a self-directing machine.”   That, quite frankly, is the light-bulb moment that I got from the Managing the Mechanism e-book.   It’s something that we can say to business stakeholders, except that it is extremely likely to scare them off.   It certainly does, I think, cast some useful insights on what we might be missing in our present-day methodologies.   We certainly do need better processes for our work, better ways to describe them, and better ways to inform stakeholders of exactly what we need from them and why.

    In closing, one of the most prevalent things that I have seen, in every project that I have tried to turn-around, is disillusionment.   On both sides of the aisle.   Long before the software had broken down, communication had also broken down, and so had business process (if it ever truly existed).   No one builds houses and bridges that way.   (For very obvious, flammable and heavy reasons, no one is allowed to ...)   I suspect that the seeds of project failure are sown almost as soon as the first plow-blade cuts the soil.   This is our problem, as a profession, and we need a better solution to it.   Perhaps a different viewpoint is a start.

    That’s my Meditation.   Borne, as I said, from a most-interesting career path that has not always been a happy one.   What do you think?   What have your experiences been?   For instance, have you worked-through a spectacular success story from one of these other strategies?   I’d love to hear it . . .   The water in the cooler is ice-cold and there’s beer in the fridge that’s even colder.   May the discussions begin?

Nobody Expects the Agile Imposition (Part IX): Culture
3 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Jun 20, 2015 at 03:40

    In 2008, we were pretty much a Scrum company ... however, a few years later, we had grown into a bunch of teams and we found that some of the standard Scrum practices were actually getting in the way. Rules are a good start, then break them when needed. We decided that Agile matters more than Scrum.

    Autonomy means the squad decides what to build, how to build it, and how to work together while doing it. One consequence of autonomy is that we have very little standardization. When people ask things like which code editor do you use or how do you plan, the answer is mostly depends on which squad. Some do Scrum sprints, others do Kanban ... it's really up to each squad. Instead of formal standards we have a strong culture of cross-pollination; when enough squads use a specific practice or tool, such as git, that becomes the path of least resistance, and other squads tend to pick the same tool.

    Why is autonomy so important? Well, because it's motivating and motivated people build better stuff.

    -- from Spotify Engineering Culture (Part 1) by Henrik Kniberg (0:30-4:40)

    Instead of blindly following Scrum dogma, I advise you to analyse the problems you and your company face daily. Reason about them. Consider applying Agile and Lean principles to them. Experiment to see what works for you and what doesn't.

    After feeling isolated and alone in resisting the Scrum imposition, reading Spotify's story has cheered me up. I've become more hopeful that things will change, that over time more folks will come to see the benefits of greater team autonomy.

    Autonomy and Alignment

    It's kind of like a jazz band, although each musician is autonomous and plays his own instrument, they listen to each other and focus on the whole song together. That's how great music is created. So our goal is loosely coupled but tightly aligned squads.

    Down here is low alignment and low autonomy, a micromanagement culture, no high level purpose, just shut up and follow orders. High alignment and high autonomy means leaders focus on what problem to solve but let the teams figure out how to solve it. Alignment enables autonomy.

    -- from Spotify Engineering Culture (Part 1) by Henrik Kniberg (3:20-3:50)

    For high autonomy to work, you need high alignment. With low alignment, teams simply do whatever they want, with each team going off in a different direction.

    Specialists vs Generalists

    Each system is owned by one squad. But we have an internal open source model and our culture is more about sharing than owning. Suppose squad one here needs something done in system B and squad two knows that code best, they'll typically ask squad two to do it. However, if squad two doesn't have time, then squad one doesn't necessarily need to wait. Instead they're welcome to go ahead and edit the code themselves and then ask squad two to review the changes. So anyone can edit code but we have a culture of peer code review. This improves quality and spreads knowledge. Over time we've evolved design guidelines, code standards, and other things to reduce engineering friction, but only when badly needed.

    -- from Spotify Engineering Culture (Part 1) by Henrik Kniberg (5:10-6:00)

    The basic unit of development is the squad. Because each squad sticks with one mission and one part of the product for a long time, they can really become experts in that area. A tribe is a collection of squads that work in related areas. The chapter is your small family of people having similar skills and working within the same general competency area (e.g. QA or Web Development), within the same tribe. As a squad member, my chapter lead is my formal line manager, a servant leader, focusing on coaching and mentoring me as an engineer, so I can switch squads without getting a new manager. A guild is a more organic and wide-reaching "community of interest", a group of people that want to share knowledge, tools, code, and practices. Chapters are always local to a tribe, while a guild usually cuts across the whole organization. Some examples are: the web technology guild, the tester guild, the agile coach guild. Anyone can join or leave a guild at any time. Most organizational charts are an illusion, so our main focus is community rather than hierarchical structure.

    -- from Scaling Agile @ Spotify (with Tribes, Squads, Chapters and Guilds) (pdf) by Henrik Kniberg & Anders Ivarsson and Spotify Engineering Culture (Part 1) by Henrik Kniberg (7:30-8:50)

    Alistair Cockburn (one of the founding fathers of agile software development) visited Spotify and said "Nice - I've been looking for someone to implement this matrix format since 1992 :) so it is really welcome to see"

    -- from Scaling Agile @ Spotify (with Tribes, Squads, Chapters and Guilds) (pdf) by Henrik Kniberg & Anders Ivarsson

    In learning more about Spotify, I was also glad to learn how they deal with specialization. You see, this tricky topic has been a chronic nuisance for us, for a number of reasons.

    First, code quality. New code written by a generalist, and reviewed by a generalist, is a long term code quality nightmare. Can you imagine what Perl code written by a ten-year Java veteran with a few days of Perl experience looks like? I can because I've seen it. There have been times when I've opened up a Perl file and rolled on the floor laughing because it was clear that whoever wrote it had no understanding of Perl at all.

    Second, employee engagement. Though many programmers are happy to become generalists, a significant minority (including me) are deeply dissatisfied with that role. I derive much more job satisfaction from doing an expert job in something I deeply understand than from "cargo-culting" some code that seems to kinda work even though I lack a deep understanding of why. I find that dissatisfying. I do not want to write Java code that causes my Java expert colleague to roll on the floor laughing.

    Another problem is that system architecture can be compromised if nobody focuses on the integrity of the system as a whole. To mitigate this risk, Spotify have a "System Owner" role. All systems have a system owner, or a pair of system owners (one with a developer perspective and one with an operations perspective is common). They further have a chief architect role, someone who coordinates work on high-level architectural issues that cut across multiple systems.

    Healthy Culture Heals Broken Process

    Trust is more important than control. Agile at scale requires trust at scale. And that means no politics. It also means no fear. Fear doesn't just kill trust, it kills innovation because if failure gets punished people won't dare try new things.

    -- from Spotify Engineering Culture (Part 1) by Henrik Kniberg (12:40-13:00)

    We focus on motivation, community and trust rather than structure and control. Healthy Culture Heals Broken Process.

    -- from Spotify Engineering Culture (Part 2) by Henrik Kniberg (0:40-0:50, 12:00-12:30)

    I've seen first hand how a healthy organisational culture of good communication and continuous improvement can effectively solve process problems.

    I've also seen first hand how a culture of fear and an over-emphasis on control and structure can harm innovation.

    Lean Startup

    We aim to make mistakes faster than anyone else. Continuous improvement, driven from below and supported from above. Failure must be non-lethal and with a "limited blast radius". Lean startup principles: think it, build it, ship it, tweak it. The biggest risk is building the wrong thing. Release first to a small percentage of users, then use A/B testing, then gradually roll out to the rest of the world. Impact is more important than velocity. Innovation more important than predictability.

    -- from Spotify Engineering Culture (Part 2) by Henrik Kniberg (2:00-)

    These are some of the ideas used by Spotify to build and release product. Since there is enough in this installment already, I'll postpone a discussion of Lean startup and related ideas to the next episode.

    Other Articles in This Series

    External References

Nobody Expects the Agile Imposition (Part VIII): Software Craftsmanship
4 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Jun 11, 2015 at 05:22

    Long long ago, at an Agile summit in Utah, a group of software industry veterans, frustrated by what they saw as overly heavyweight software processes, concocted the Manifesto for Agile Software Development:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan
    ... and so the "Agile Transformation era" began:

    The Agile Alliance formed, conferences were held, companies went mad wanting a bit of this, everyone was having standup meetings, burndown charts, product backlogs, Agile coaches all over the place, more Agile coaches than developers ... and Post-its, Post-its, Post-its everywhere. The more Post-its you have the more agile you are.

    We spent ten years talking about people, interactions, team building, eliminating waste ... and at some point agile took a detour ... process became more important than technical practices ... everyone went crazy at the Post-it party, three years later they woke up and realised that every two weeks we see the pile of s#?! getting bigger.

    -- from Software Craftsmanship talk by Sandro Mancuso (2:00-5:00)

    Uncle Bob proposed adding the assertion that we value "Craftsmanship over crap" to the manifesto

    -- Uncle Bob, craftsmen and the Agile Manifesto

    Around 2008, some of the original Agile Manifesto-istas, led by Robert C Martin (aka "Uncle Bob"), felt that Agile had gone off the rails, with too much emphasis on process rather than technical practices and code quality. This group felt that Agile projects, every two weeks, relentlessly, steadily, iteratively, were producing more and more crap code. Adding to technical debt. Slowing us down. Though Agile gave good feedback on many things, code quality was not one of them; it is not visible on burndown charts or the Scrum board.

    How does this happen? Well, there seems to be an unwritten law of the Daily Standup that every morning you have to move at least one Post-it note. To comply with this "law", I sometimes pick up the Post-it note I was working on yesterday and move it a few millimeters to the right on the Scrum board while explaining why it is not really done yet. I always feel bad when I do this though; I would much rather proudly proclaim that it is done, so as to publicly show-off how productive I am.

    Sadly, I've occasionally noticed folks succumb to this psychological pressure to proclaim something done when it is not ... resulting in an ever increasing pile of poo, as illustrated by Sandro's next (pair programming) anecdote:

    "but what about these names, they don't make sense ... and there's lots of duplication ... and ..." yeah, well as it's almost done now let's just check it in and add that to the technical debt backlog. This was a brand new feature in an agile team! The guy (Sandro was pairing with) was writing brand new code already thinking to add stuff to the technical debt backlog!

    -- from Software Craftsmanship talk by Sandro Mancuso (5:30-6:20)

    How to fix this lamentable state of affairs? The new Software Craftsmanship group chose to add four new Software Craftsmanship principles to the original Agile manifesto, creating a Manifesto for Software Craftsmanship:

    • Not only working software but also well-crafted software. Code we can refactor confidently and without fear.
    • Not only responding to change but also steadily adding value. Not just bug-fixing, but improving code structure, relentlessly keeping it clean.
    • Not only individuals and interactions, but also a community of professionals. Mentoring, sharing, user groups, passion, professionalism.
    • Not only customer collaboration, but also productive partnerships. Not a "manager giving orders to a developer" relationship, rather a partnership of equals built on trust.

    Well-crafted Software

    God forbid that we have one day someone who is a Software Craftsmanship coach ... it is about lead by example, being a mentor ... not about beautiful code, just trying to provide value, not writing crap code for your customers. The lack of craftsmanship can be one of the main causes of failing projects.

    -- from Software Craftsmanship talk by Sandro Mancuso (30:00-31:00)

    Like Sandro, I disagree with the view that "Software craftsmen just care about beautiful code". Today, for example, it took me quite a while to figure out why some code that "should" fail was in fact reporting "success". I almost fell off my chair when I finally realized that it was silently catching and throwing away all exceptions! No comment explaining why it would do such a bizarre thing. To me, this sort of sloppiness shows a basic lack of respect for your colleagues; lack of care and comments wastes their time. It is unprofessional. Nothing to do with beautiful code.

    Steadily Adding Value

    After five years, the software is so s#?! that in the beginning it took two days to add a feature; a feature of the same size now takes two months ... so they write a brand new one that is as s#?! as the previous one that will also be decommissioned in five years time.

    -- from Software Craftsmanship talk by Sandro Mancuso (17:00-18:00)

    Now the two teams are in a race. The tiger team must build a new system that does everything that the old system does. Not only that, they have to keep up with the changes that are continuously being made to the old system. Management will not replace the old system until the new system can do everything that the old system does. This race can go on for a very long time. I've seen it take 10 years. And by the time it's done, the original members of the tiger team are long gone, and the current members are demanding that the new system be redesigned because it's such a mess.

    -- Robert C Martin in Clean Code (p.5)

    The only way I can see to avoid this sort of fiasco is to have the discipline to always keep the code clean in the first place, relentlessly refactoring as required every time you add a new feature.

    Productive Partnerships

    As soon as the button was pressed to mute NASA from our meeting, the managers said "we have to make a management decision", said Boisjoly.

    The general manager of Thiokol turned to his three senior managers and asked what they wanted to do. Two agreed to go to a launch decision, one refused. So he (the general manager) turns to him and said "take off your engineering hat and put on your management hat" -- and that's exactly what happened, said Boisjoly. He changed his hat and changed his vote, just thirty minutes after he was the one to give the recommendation not to launch. I didn't agree with one single statement made on the recommendations given by the managers.

    The teleconference resumed and NASA heard that Thiokol had changed their mind and gave a recommendation to launch. NASA did not ask why.

    I went home, opened the door and didn't say a word to my wife, added Boisjoly. She asked me what was wrong and I told her "oh nothing hunny, it was a great day, we just had a meeting to go launch tomorrow and kill the astronauts, but outside of that it was a great day".

    -- from Remembering the mistakes of Challenger

    As indicated by the appalling "management decision" to launch the Challenger Space Shuttle, the lack of a true partnership, built on trust, between technical folks and management can have truly tragic consequences.

    Under pressure we cut corners ... No one wakes up in the morning and says "today I'm going to screw up, today I am going to write the worst code I can possibly write, I'm going to f#?! up with this company" ... I met some people who did that, but normal people don't do that ... everyone is trying to do a good job.

    The business asks how long is it going to take? ... The pressure we keep talking about, the pressure we put on ourselves ... we think we don't have time, even when we have the power to go to the business and tell them how long something is going to take!

    -- from Software Craftsmanship talk by Sandro Mancuso (7:00-9:00)

    A Community of Professionals

    Researchers (Bloom (1985), Bryan & Harter (1899), Hayes (1989), Simmon & Chase (1973)) have shown it takes about ten years to develop expertise in any of a wide variety of areas, including chess playing, music composition, telegraph operation, painting, piano playing, swimming, tennis, and research in neuropsychology and topology. The key is deliberative practice: not just doing it again and again, but challenging yourself with a task that is just beyond your current ability, trying it, analyzing your performance while and after doing it, and correcting any mistakes. Then repeat. And repeat again. There appear to be no real shortcuts: even Mozart, who was a musical prodigy at age 4, took 13 more years before he began to produce world-class music.

    -- from Peter Norvig's Teach Yourself Programming in Ten Years

    As indicated by Norvig above, it takes time and effort to become an expert, a true software craftsman. There appear to be no real shortcuts. With the current trend towards generalists in agile teams, finding the time to become a craftsman can be problematic as you find yourself constantly switching from one domain to another, from one language to another. This also affects code quality in that most code is being written by generalists, i.e. non-experts. This will be the topic of the next installment in this series.

    Other Articles in This Series

    External References

    Perl Monks References

The sieve of Xuedong Luo (Algorithm3) for generating prime numbers
5 direct replies — Read more / Contribute
by marioroy
on Jun 11, 2015 at 00:34

    Update June 26, 2015: New results using Math::Prime::Util v0.51 at the end of the post.

    Some time ago I traveled to an imaginary place filled with prime numbers. In one of the paintings was the following statement: with k initialized to 1, the value alternates between 2 and 1 repeatedly. It was this statement which inspired me to give Algorithm3 a try.

    k = 3 - k;
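    That one line really does flip k between 2 and 1 forever; a tiny Perl check (just an illustration) makes the alternation visible:

    my $k = 1;
    print join( ' ', map { $k = 3 - $k } 1 .. 8 ), "\n";   # 2 1 2 1 2 1 2 1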

    There are various prime modules on CPAN. Thus, I have no plans to create another. All I wanted to do was to compare the Sieve of Eratosthenes with Algorithm3. I created a local copy of Math::Prime::FastSieve by davido and replaced the primes function.

    Sieve of Eratosthenes

    /* Sieve of Eratosthenes. Return a reference to an array containing all
     * prime numbers less than or equal to search_to. Uses an optimized sieve
     * that requires one bit per odd from 0 .. n. Evens aren't represented in the
     * sieve. 2 is just handled as a special case. */

    SV* primes( long search_to )
    {
        AV* av = newAV();

        if( search_to < 2 )
            return newRV_noinc( (SV*) av );   // Return an empty list ref.

        av_push( av, newSVuv( 2UL ) );

        // Allocate space for odd numbers (15 bits per 30 values)
        sieve_type primes( search_to/2 + 1, 0 );

        // Sieve over the odd numbers
        for( sieve_size_t i = 3; i * i <= search_to; i+=2 )
            if( ! primes[i/2] )
                for( sieve_size_t k = i*i; k <= search_to; k += 2*i)
                    primes[k/2] = 1;

        // Add each prime to the list ref
        for( sieve_size_t i = 3; i <= search_to; i += 2 )
            if( ! primes[i/2] )
                av_push( av, newSVuv( static_cast<unsigned long>( i ) ) );

        return newRV_noinc( (SV*) av );
    }
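    For anyone who wants to experiment without Inline::C, a plain pure-Perl, odds-only Sieve of Eratosthenes can be sketched as below. This is only an illustration (array-based rather than bit-packed) and will be far slower than the XS version above:

    sub primes_pp {
        my ($search_to) = @_;
        return [] if $search_to < 2;

        my @is_composite;                 # index $n is true if $n is composite
        my @primes = (2);                 # 2 handled as a special case

        for ( my $i = 3; $i * $i <= $search_to; $i += 2 ) {
            next if $is_composite[$i];
            for ( my $k = $i * $i; $k <= $search_to; $k += 2 * $i ) {
                $is_composite[$k] = 1;
            }
        }
        for ( my $i = 3; $i <= $search_to; $i += 2 ) {
            push @primes, $i unless $is_composite[$i];
        }
        return \@primes;
    }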

    Sieve of Xuedong Luo (Algorithm3)

    /* Sieve of Xuedong Luo (Algorithm3). Return a reference to an array
     * containing all prime numbers less than or equal to search_to.
     *
     * A practical sieve algorithm for finding prime numbers.
     * ACM Volume 32 Issue 3, March 1989, Pages 344-346
     * http://dl.acm.org/citation.cfm?doid=62065.62072
     *
     * Avoid all composites that have 2 or 3 as one of their prime factors
     * where i is odd.
     *
     * { 0, 5, 7, 11, 13, ... 3i + 2, 3(i + 1) + 1, ..., N }
     *   0, 1, 2,  3,  4, ... list indices (0 is not used)
     */

    SV* primes( long search_to )
    {
        AV* av = newAV();

        if( search_to < 2 )
            return newRV_noinc( (SV*) av );   // Return an empty list ref.

        sieve_size_t i, j, q = (sieve_size_t) sqrt((double) search_to) / 3;
        sieve_size_t M = (sieve_size_t) search_to / 3;
        sieve_size_t c = 0, k = 1, t = 2, ij;

        // Allocate space. Set bits to 1. Unset bit 0.
        sieve_type primes( M + 2, 1 );
        primes[0] = 0;

        // Unset bits greater than search_to.
        if ( 3 * M + 2 > search_to + ((sieve_size_t)search_to & 1) )
            primes[M] = 0;
        if ( 3 * (M + 1) + 1 > search_to + ((sieve_size_t)search_to & 1) )
            primes[M + 1] = 0;

        // Clear composites.
        for ( i = 1; i <= q; i++ ) {
            k  = 3 - k,  c = 4 * k * i + c,  j = c;
            ij = 2 * i * (3 - k) + 1,  t = 4 * k + t;
            if ( primes[i] ) {
                while ( j <= M ) {
                    primes[j] = 0;
                    j += ij, ij = t - ij;
                }
            }
        }

        // Gather primes.
        if( search_to >= 2 )
            av_push( av, newSVuv( 2UL ) );
        if( search_to >= 3 )
            av_push( av, newSVuv( 3UL ) );

        for ( i = 1; i <= M; i += 2 ) {
            if ( primes[i] )
                av_push( av, newSVuv(static_cast<unsigned long>(3 * i + 2)) );
            if ( primes[i + 1] )
                av_push( av, newSVuv(static_cast<unsigned long>(3 * (i + 1) + 1)) );
        }

        return newRV_noinc( (SV*) av );
    }

    Below is the time taken to find all prime numbers smaller than 1 billion.

    my $primes = primes( 1_000_000_000 );

    Sieve of Eratosthenes   4.879 seconds
    Sieve of Xuedong Luo    3.751 seconds

    There are 50,847,534 prime numbers between 1 and 1 billion.
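    A timing wrapper along these lines (a sketch, not the exact harness used here; Time::HiRes ships with Perl) reproduces that kind of measurement:

    use Time::HiRes qw(time);

    my $start  = time;
    my $primes = primes( 1_000_000_000 );
    printf "%d primes found in %.3f seconds\n", scalar @$primes, time - $start;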

    I was so fascinated by Algorithm3 that I decided to parallelize it. It took me 5 weekends just to get the math to work. But I wanted it faster, so I placed it aside. Two years later I tried again and created Sandboxing with Perl + MCE + Inline::C. I also tried Math::Prime::Util by Dana Jacobsen and primesieve by Kim Walisch.

    Testing was done on a Haswell Core i7 Macbook Pro running at 2.6 GHz configured with 1600 MHz memory.

    #
    # Count primes
    #

    perl algorithm3.pl 1_000_000_000
       Prime numbers : 50847534
       Compute time  : 0.146 sec

    perl primesieve.pl 1_000_000_000
       Prime numbers : 50847534
       Compute time  : 0.064 sec

    perl primeutil.pl 1_000_000_000
       Prime numbers : 50847534
       Compute time  : 0.024 sec

    #
    # Sum primes ( 203_280_221 prime numbers )
    #

    perl algorithm3.pl 4_294_967_296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 1.082 sec

    perl primesieve.pl 4_294_967_296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 0.369 sec

    perl primeutil.pl 4_294_967_296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 2.207 sec

    #
    # Print primes ( 2.0 GB, beware... )
    #

    perl algorithm3.pl 4_294_967_296 --print >/dev/null
       Compute time  : 2.071 sec

    perl primesieve.pl 4_294_967_296 --print >/dev/null
       Compute time  : 1.395 sec

    perl primeutil.pl 4_294_967_296 --print >/dev/null
       Compute time  : 13.470 sec

    Fast is possible in Perl. Thus, Perl is fun. One is not likely to print that many prime numbers. Math::Prime::Util is powerful with many features. Algorithm3 was mainly an exercise exploring Perl + MCE + Inline::C possibilities.

    Update June 26, 2015 using Math::Prime::Util v0.51

    The mce-sandbox was updated to call the new sum_primes/print_primes functions in Math::Prime::Util v0.51 for the primeutil.pl example.

    Count primes

    $ perl algorithm3.pl 4294967296
       Prime numbers : 203280221
       Compute time  : 0.623 sec

    $ perl primesieve.pl 4294967296
       Prime numbers : 203280221
       Compute time  : 0.252 sec

    $ perl primeutil.pl 4294967296
       Prime numbers : 203280221
       Compute time  : 0.210 sec

    Sum of primes

    $ perl algorithm3.pl 4294967296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 1.090 sec

    $ perl primesieve.pl 4294967296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 0.367 sec

    $ perl primeutil.pl 4294967296 --sum
       Sum of primes : 425649736193687430
       Compute time  : 0.768 sec

    Print primes ( outputs 2 GB containing 203280221 prime numbers )

    $ perl algorithm3.pl 4294967296 --print >/dev/null
       Compute time  : 2.086 sec

    $ perl primesieve.pl 4294967296 --print >/dev/null
       Compute time  : 1.397 sec

    $ perl primeutil.pl 4294967296 --print >/dev/null
       Compute time  : 1.925 sec

    Kind regards, Mario

A "Fun"ctional Attempt
2 direct replies — Read more / Contribute
by withering
on Jun 04, 2015 at 04:14

    Perl is somewhat functional itself -- if the concept "functional" we're talking about mainly means using functions as first-class objects, rather than wiping almost all side effects out. However, the lack of sugar such as useful prototypes, pattern matching, or list comprehensions makes Perl less attractive in some functional situations (for fun, perhaps).

    I have made a few attempts to achieve a better experience when programming Perl for fun. For example:

    #!/usr/bin/env perl

    use HOI::Comprehensions;
    use HOI::Match;

    sub slowsort {
        HOI::Match::pmatch(
            'nil' => sub { [] },
            'pivot :: unsorted' => sub {
                my $left  = HOI::Comprehensions::comp( sub { $x }, x => $unsorted )->( sub { $x <= $pivot } );
                my $right = HOI::Comprehensions::comp( sub { $x }, x => $unsorted )->( sub { $x > $pivot } );
                [ @{slowsort($left->force)}, $pivot, @{slowsort($right->force)} ]
            },
        )->(@_)
    }

    my $res = slowsort [3, 4, 1, 2, 5, 6];
    print @$res, "\n";

    where HOI::Match and HOI::Comprehensions are used to give a simple description of the whole computation. It should be noted that the code is not so strict since it assumes the existence of local variables. Scopes are dynamic, and bound variables are bound by names, as the ones in a typical lambda calculus theory.

    It is still not clear to me whether such sugar ideas are welcome or not. Any suggestion or criticism is welcome. I am looking forward to your replies.

Happy Monk day. (To me!)
12 direct replies — Read more / Contribute
by BrowserUk
on Jun 03, 2015 at 22:20

    I just realised that it's 13(*) years to the day since I posted this. We all start somewhere. I've been here almost every day since.

    *13. Is this the year it all goes wrong?


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked
Net::FTP fail workaround
1 direct reply — Read more / Contribute
by kurgan
on Jun 03, 2015 at 07:50

    I ran into an issue where Net::FTP was failing at random while connecting many times over the life of a script. I searched for a solution only to find that others were having similar issues. It seems that this excellent module does not actually check to see if the port that is used for connection is working correctly before forging on. The only way around it (that I have found) was to check for the error and reconnect on-the-fly until a working port has been found. It is not a perfect solution as it takes extra resources that could be used elsewhere if the error check was done at the module level, but it has worked well for me so far.
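    To make the workaround concrete, the retry idea can be sketched roughly like this; the host, credentials and retry limit are hypothetical placeholders, and the real script's details differ:

    use strict;
    use warnings;
    use Net::FTP;

    sub ftp_connect_with_retry {
        my ( $host, %opts ) = @_;
        my $max_tries = delete $opts{MaxTries} || 5;

        for my $try ( 1 .. $max_tries ) {
            my $ftp = Net::FTP->new( $host, %opts );
            if ( $ftp && $ftp->login( 'anonymous', 'anon@example.com' ) ) {
                return $ftp;                  # got a working connection
            }
            warn "connect/login attempt $try failed: " . ( $@ || 'login error' ) . "\n";
            sleep 1;                          # brief pause before trying a fresh connection
        }
        die "could not establish a working FTP connection to $host\n";
    }

    my $ftp = ftp_connect_with_retry( 'ftp.example.com', Passive => 1, Timeout => 30 );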

    I just thought others might find this of some help if they find themselves in the same situation. I was not really sure where to post this, so I asked in the CB and was pointed here (thank you all for the help!). If you find this helpful at all, please share it with others at your pleasure.

What's new in ECMAScript6, or: Oh no! Don't steal syntax from Perl!
4 direct replies — Read more / Contribute
by FreeBeerReekingMonk
on May 29, 2015 at 12:49

    Harmony... that's what the upcoming JS is called. It has some new syntax... and it's Perl syntax... but does something totally different! So let me show you why I will be confused for years to come:
    These 2 things are equivalent:

    people.map(function (person) { return person.age; });
    people.map(person => person.age);
    Why? Why a fat arrow... why not the other way? <=
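    For comparison, the Perl idiom for the same transformation doesn't need an arrow at all (a small sketch, assuming an array of hashrefs):

    my @people = ( { name => 'Ann', age => 29 }, { name => 'Bob', age => 41 } );
    my @ages   = map { $_->{age} } @people;    # (29, 41)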

    Then there is let, which is similar to my but you seem to still be able to mix var in and create a scope mess...

    They also use modules, with named imports (just like in Perl you can require certain things from a pm).
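    The Perl counterpart of those named imports is simply the import list on use (a quick sketch):

    use List::Util qw(first sum);   # import only the names you ask for
    use POSIX ();                   # or import nothing and call POSIX::floor() fully qualified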
    And the templating... oh noes... the templating uses backtick/backquote...
    var name = 'John';
    var age = 29;
    return `My name is ${ name }, in a year I will be ${ age + 1 } years old`;

    Interested in reading more about JS6? Here is a link:
    ecmascript6-introduction

    Now leave me be... I'm drowning my pain in beer...

perl Data structure to XML
1 direct reply — Read more / Contribute
by Hosen1989
on May 27, 2015 at 07:50

    Hi ALL.

    I would like to share with you a solution for an issue I faced, and the way I came up with to solve it.

    I was in need of a module that takes any kind of hash or array, or any combination of those two, and converts it to an XML file.

    Well, yes, there are other modules that do this task, but not in the way I needed it.

    So here is my way to do this task:

    Also, please point out any errors in my code, or any better way to do the same thing. ^_^

    use strict;
    use warnings;
    use XML::LibXML;

    my %TV = (
        flintstones => {
            series  => "flintstones",
            nights  => [ "monday", "thursday", "friday" ],
            members => [
                { name => "fred",    role => "husband", age => 36, },
                { name => "wilma",   role => "wife",    age => 31, },
                { name => "pebbles", role => "kid",     age => 4,  },
            ],
        },
        jetsons => {
            series  => "jetsons",
            nights  => [ "wednesday", "saturday" ],
            members => [
                { name => "george", role => "husband", age => 41, },
                { name => "jane",   role => "wife",    age => 39, },
                { name => "elroy",  role => "kid",     age => 9,  },
            ],
        },
        simpsons => {
            series  => "simpsons",
            nights  => [ "monday" ],
            members => [
                { name => "homer", role => "husband", age => 34, },
                { name => "marge", role => "wife",    age => 37, },
                { name => "bart",  role => "kid",     age => 11, },
            ],
        },
    );

    my $xmlString = HASH2XML(\%TV, 'TV_SERIES');
    print $xmlString;

    ##############################################################################

    sub HASH2XML {
        my ($inHashRef) = $_[0];
        my ($inName)    = $_[1];
        if (!defined $inName) { $inName = 'rootNode'; }

        my $doc      = XML::LibXML::Document->new('1.0', 'utf-8');  # to create XML doc
        my $rootNode = $doc->createElement("$inName");              # to create xml root node
        $rootNode->setAttribute('Profile_id' => "$inName");         # add some Attribute to the node

        _OBJ2XML_($doc, $rootNode, $inHashRef, $inName);

        $doc->setDocumentElement($rootNode);
        # print $doc->toString();
        return $doc->toString(1);
    }

    sub _OBJ2XML_ {
        my ($doc)    = $_[0];
        my ($inNode) = $_[1];
        my ($inRef)  = $_[2];
        my ($inName) = $_[3];
        if (!defined $inName) { $inName = 'Node'; }

        if (ref($inRef) eq 'HASH') {
            for my $key (sort keys %{$inRef}) {
                if (ref($inRef->{$key}) eq 'HASH') {
                    my $tag = $doc->createElement($key);
                    # $tag->setAttribute('dataType' => "HASH");
                    $inNode->appendChild($tag);
                    _OBJ2XML_($doc, $tag, $inRef->{$key}, $key);
                } else {
                    _OBJ2XML_($doc, $inNode, $inRef->{$key}, $key);
                }
            }
        } elsif (ref($inRef) eq 'ARRAY') {
            my $Len = @{$inRef};
            for (my $i = 0; $i < $Len; $i++) {
                if (ref(@{$inRef}[$i]) eq 'HASH' or ref(@{$inRef}[$i]) eq 'ARRAY') {
                    my $tag = $doc->createElement($inName);
                    # $tag->setAttribute('dataType' => "ARRAY");
                    $inNode->appendChild($tag);
                    _OBJ2XML_($doc, $tag, @{$inRef}[$i], $inName);
                } else {
                    _OBJ2XML_($doc, $inNode, @{$inRef}[$i], $inName);
                }
            }
        } elsif (ref($inRef) eq 'CODE') {
            print "--End to CODE ref @ LINE:" . __LINE__ . "\n";
            return 0;
        } elsif (ref($inRef) eq 'SCALAR') {
            print "--End to SCALAR ref @ LINE:" . __LINE__ . "\n";
            return 0;
        } else {
            my $tag = $doc->createElement($inName);
            # $tag->setAttribute('dataType' => "data");
            $tag->appendTextNode($inRef);
            $inNode->appendChild($tag);
            return 0;
        }
        return 0;
    }

    -----------------------------------------------------------------

    And below is a sample of the output:

    # ------------------------------------------#
    #  the XML output after doing pretty print  #
    # ------------------------------------------#
    <?xml version="1.0" encoding="utf-8"?>
    <TV_SERIES Profile_id="TV_SERIES">
      <flintstones>
        <members>
          <age>36</age>
          <name>fred</name>
          <role>husband</role>
        </members>
        <members>
          <age>31</age>
          <name>wilma</name>
          <role>wife</role>
        </members>
        <members>
          <age>4</age>
          <name>pebbles</name>
          <role>kid</role>
        </members>
        <nights>monday</nights>
        <nights>thursday</nights>
        <nights>friday</nights>
        <series>flintstones</series>
      </flintstones>
      <jetsons>
        <members>
          <age>41</age>
          <name>george</name>
          <role>husband</role>
        </members>
        <members>
          <age>39</age>
          <name>jane</name>
          <role>wife</role>
        </members>
        <members>
          <age>9</age>
          <name>elroy</name>
          <role>kid</role>
        </members>
        <nights>wednesday</nights>
        <nights>saturday</nights>
        <series>jetsons</series>
      </jetsons>
      <simpsons>
        <members>
          <age>34</age>
          <name>homer</name>
          <role>husband</role>
        </members>
        <members>
          <age>37</age>
          <name>marge</name>
          <role>wife</role>
        </members>
        <members>
          <age>11</age>
          <name>bart</name>
          <role>kid</role>
        </members>
        <nights>monday</nights>
        <series>simpsons</series>
      </simpsons>
    </TV_SERIES>
Is CGI.pm dead?
5 direct replies — Read more / Contribute
by Anonymous Monk
on May 26, 2015 at 07:08

    Hello, I've found out that CGI.pm is no longer in the core distribution of Perl. I've also read that there are many better ways to implement a web application, like using Plack, or CGI::Application, CGI::Snapp, Dancer, Mojolicious... It is also suggested to use templating systems (OK, fine, I've used them with CGI.pm).

    However, with modern responsive websites I think that CGI.pm is still a great module, so simple to use that I don't see a reason to move away from it or use anything else.

    You just write an API for the client javascript making an AJAX request and you're all done with something like:

    #!/usr/bin/perl
    use strict;
    use CGI;
    use JSON;

    my $query    = new CGI;
    my $response = whatever(Query => $query);
    my $json     = JSON->new->utf8(1)->pretty(1)->allow_nonref->encode($response);

    print $query->header('application/json') . $json;

    sub whatever {
        # Here you can do REALLY anything, and send back a hash reference
    }
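    For comparison, roughly the same JSON endpoint under Plack/PSGI is not much longer. This is only a sketch (it assumes Plack is installed and the file is started with plackup app.psgi):

    use strict;
    use warnings;
    use JSON;

    my $app = sub {
        my $env      = shift;
        my $response = { status => 'ok' };   # here you can do really anything
        my $json     = JSON->new->utf8(1)->pretty(1)->allow_nonref->encode($response);
        return [ 200, [ 'Content-Type' => 'application/json' ], [ $json ] ];
    };

    $app;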

    Do I make it too easy? This has a flat learning curve too.

Perl monks vs other sites
3 direct replies — Read more / Contribute
by f77coder
on May 23, 2015 at 23:09
    Hello All,

    I wasn't sure where to post this, so apologies if this is not the place.

    I wanted to say how great Perl Monks is at helping out noobs, compared with the knuckle-dragging neanderthals at places like Stack Overflow. People here are generally orders of magnitude nicer.

    Kudos to the site.

RFC: Swagger-codegen for Perl
2 direct replies — Read more / Contribute
by wing328
on May 15, 2015 at 02:17
    Hi all,

    https://github.com/swagger-api/swagger-codegen contains a template-driven engine to generate client code in different languages by parsing your Swagger Resource Declaration. Recently I've added the Perl template. To test the code generation, please perform the following (assuming you have the dependencies installed):

    git clone https://github.com/swagger-api/swagger-codegen.git
    git checkout develop_2.0
    mvn clean && ./bin/perl-petstore.sh

    If you do not want to install the dependencies and just want to get a peek at the auto-generated Perl SDK, please go to the directory samples/client/petstore/perl to have a look at the Perl SDK for Petstore http://petstore.swagger.io (please make sure you're in the develop_2.0 branch)

    The Perl SDK is not perfect and I would appreciate your time to test and review, and share with me your feedback.

    (ideally I would like to post this at "Meditations" but I couldn't find a way to post there)

    Best,
    wing328
    http://github.com/wing328
