PerlMonks: There's more than one way to do things

### Meditations


If you've discovered something amazing about Perl that you just need to share with everyone, this is the right place.

This section is also used for non-question discussions about Perl, and for any discussions that are not specifically programming related. For example, if you want to share or discuss opinions on hacker culture, the job market, or Perl 6 development, this is the place. (Note, however, that discussions about the PerlMonks web site belong in PerlMonks Discussion.)

Meditations is sometimes used as a sounding-board — a place to post initial drafts of perl tutorials, code modules, book reviews, articles, quizzes, etc. — so that the author can benefit from the collective insight of the monks before publishing the finished item to its proper place (be it Tutorials, Cool Uses for Perl, Reviews, or whatever). If you do this, it is generally considered appropriate to prefix your node title with "RFC:" (for "request for comments").

User Meditations
by VinsWorldcom
on Aug 05, 2015 at 06:28

Windows Monks,

Do you use Notepad++ as your preferred editor? Have you done some things with NppExec to move towards an IDE? Do you want integrated debugger support?

I was out of luck on the last one and a myriad of Google-ing didn't help. There was a debugger plugin for Notepad++ (DBGp - http://sourceforge.net/projects/npp-plugins/files/DBGP%20Plugin/) but it was for PHP debugging with XDebug ... originally. I set out to get Perl working with it and lo and behold - I did it!

UPDATE: I'm on Windows 7 x64 and using Strawberry Perl 5.18.1 MSWin32-x64-multi-thread.

The gory details can be found here:

http://vinsworldcom.blogspot.com/2015/08/debugging-perl-debugger-part-2-variable.html
http://vinsworldcom.blogspot.com/2015/08/debugging-perl-debugger-part-3.html

Essentially, you need:

#### DBGp plugin

Get it from the SourceForge page at http://sourceforge.net/projects/npp-plugins/files/DBGP%20Plugin/. Get the latest (0.13 beta as of this writing) and you'll only need the DLL. Current file name is: DBGpPlugin_0_13b_dll.zip. Unzip the DLL to your Notepad++\plugins directory.

#### Perl Debugger for Komodo IDE

We only need this since there isn't a "Perl Debugger for Notepad++ IDE". It essentially supplies the interface via "perl5db.pl" and subdirectory "DB" of various supporting modules. You get it here: http://downloads.activestate.com/Komodo/releases/archive/4.x/4.4.1/remotedebugging/ and you'll want the Komodo-PerlRemoteDebugging-4.4.1-20896-win32-x86.zip file. NOTE: I tried newer releases, but found other issues cropped up in addition to the ones I show you how to fix below, so use this version, or the rest of this won't make much sense.

I created a directory in my Notepad++\plugins directory called "PerlDebug"; so, ...Notepad++\plugins\PerlDebug. I unzipped only the "perl5db.pl" file and the entire "DB" directory and its sub directories into that PerlDebug directory.

#### Getting it to Work

You need to make some edits to the Perl Debugger scripts from the Komodo IDE as well as set some environment variables. First, the edits.

##### Edit DB\DBgrProperties.pm

Open the Notepad++\plugins\PerlDebug\DB\DBgrProperties.pm file. Lines 657-659 read something like:

    $res .= sprintf(qq(<value%s><![CDATA[%s]]></value>\n),$encoding ? qq( encoding="$encoding") : "",$encVal);

Change that to:

    $res .= sprintf(qq(<![CDATA[%s]]>\n),$encVal);

Also, further up on line 131, you'll see:

    context_id="%d"

Change that to:

    context="%d"
##### Edit perl5db.pl

Open the Notepad++\plugins\PerlDebug\perl5db.pl file. On line 1525:

    $res .= sprintf(' line="%s"',

change to:

    $res .= sprintf(' lineno="%s"',

And, after line 2898, which should read:

    my $bpInfo = getBreakpointInfoString($bkptID, function => $bFunction);

add the following three lines:

    if ($bpInfo) {
        $res .= $bpInfo;
    }
##### Environment Variables

You'll need some environment variables to get this to work. I wanted them to be volatile so as not to upset normal operations. This is where I used NppExec. I'll assume you have it installed, as it's an awesome plugin that you should have anyway. If not, get it from the Plugin Manager.

The only essential environment variables to set are:

set PERL5LIB=C:\path\to\Notepad++\plugins\PerlDebug
set PERLDB_OPTS=RemotePort=127.0.0.1:9000

where "\path\to\Notepad++" is your directory path to Notepad++. I have mine at "C:\usr\bin\npp\plugins\PerlDebug". Yours may be "C:\Program Files\Notepad++\plugins\PerlDebug". Note that if your path has a space (like between "Program" and "Files" in the example), you'll probably need to double-quote the entire path assigned to PERL5LIB, like: set PERL5LIB="C:\Program Files\Notepad++\plugins\PerlDebug".
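Before wiring this into NppExec, you can sanity-check that the Komodo perl5db.pl will be found ahead of the stock one: perl -d simply requires the first perl5db.pl on @INC, and PERL5LIB directories are prepended to @INC. A quick sketch of my own (not part of the setup itself) to print which copy wins:

```perl
use strict;
use warnings;

# perl -d works by requiring 'perl5db.pl' from @INC, and PERL5LIB
# directories are prepended to @INC; so the first perl5db.pl found
# below is the one the debugger will load.
foreach my $dir (@INC) {
    next unless -e "$dir/perl5db.pl";
    print "perl -d will load: $dir/perl5db.pl\n";
    last;
}
```

Run this with PERL5LIB pointing at the PerlDebug directory; it should report the Komodo copy, not the one in perl's own lib directory.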

I set my variables with an NppExec script:

    NPP_SAVE
    cd "$(CURRENT_DIRECTORY)"
    NPP_MENUCOMMAND Plugins\DBGp\Debugger
    ENV_SET PERLDB_OPTS=RemotePort=127.0.0.1:9000
    ENV_SET PERL5LIB=$(SYS.PERL5LIB);$(NPP_DIRECTORY)\plugins\PerlDebug
    INPUTBOX "Command Line Arguments: "
    cmd /c start "Perl Debug" cmd /c perl.exe -d "$(FILE_NAME)" $(INPUT)

I saved it as "Perl - Debug" and used NppExec to add it to my "Macro" menu in Notepad++. It saves the current file, changes to the working directory, enables the DBGp plugin, sets the environment variables (only temporarily, within the Notepad++ context, and careful not to step on a current value that may be in PERL5LIB), prompts for any command-line input to pass to your script, and finally starts the Perl debugging session - integrated in Notepad++!

Some options I used to tune the DBGp plugin; from the Notepad++ "Plugins" menu, select "DBGp" and then "Config...":

• Ensure the top checkbox "Bypass all mapping (local windows setup)" is checked
• No configuration in the "Remote Server IP", "IDE KEY" ... window
• Under the "Misc" section, check:
  • "Break at first line when debugging starts"
  • "Refresh local context on every step"
  • "Refresh global context on every step"

Hope it works for you - happy debugging!

Building the Right Thing (Part I): Pretotyping
7 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Aug 04, 2015 at 07:56

The biggest waste in software development seems to be building the wrong product, or the wrong features

-- from How to build the right thing by Henrik Kniberg

There is nothing so useless as doing efficiently that which should not be done at all

I'd originally planned yet another installment of the long-running Agile Imposition series, reporting on Lean startup and related ideas. As I began my research, however, I soon realized this is a vast, complicated and perplexing topic; a topic so important it can make or break your business. So, to do it justice, I've decided instead to start a new series of articles on building the right thing.

Innovators Trump Ideas

Most new ideas fail, even if they are very well executed.

-- from The Pretotyping Manifesto by Alberto Savoia

At work we have a place where googlers submit their ideas; there are over 10,000 ideas. I call it the place where ideas go to die.

-- from The Pretotyping Manifesto by Alberto Savoia

If you have any doubt about the business value of ideas, try going to any venture capitalist and telling them: "I have a great idea that could be turned into a multi-billion $ business. I am not going to implement it, but if you give me a mere $10,000 I'll give you my idea and it's yours to do whatever you want with it." Just for fun, I created an ad peddling my services as an Ideator and posted it on Craigslist: "Ideator for hire. $10 per idea." I am still waiting for a serious reply.

-- from Innovators beat Ideas by Alberto Savoia

Leonard approaches them with an idea for a smartphone app that helps users solve Differential Equations and announces that nobody else is currently making an app like theirs. Because of Penny's presence, Sheldon is afraid Penny will steal Leonard's idea. He points out an "Unlikely, but very plausible scenario" that Penny befriends the gang to steal a marketable idea from them. Penny points out that she hangs out with them partly because she receives free food.

-- from The Bus Pants Utilization Big Bang Theory, Season 4, Episode 12

Sheldon's reaction notwithstanding, ideas themselves are of little value.

Though innovators trump ideas, backing an innovator -- even one with a successful track record -- is hardly a safe bet. Innovation is hard. Startups are risky. Indeed, the prime motivation of both Alberto Savoia (father of pretotyping) and Eric Ries (father of Lean startup) is that they both experienced both phenomenal success and catastrophic failure in different Startups -- and so became determined to figure out why.

Some Famous Product Failures

Many businesses disappear because the founder-entrepreneur insists that he or she knows better than the market

The Innovator's nightmare is spending years and millions to build and perfect a product or service that people don't need or want

-- from The Pretotyping Manifesto by Alberto Savoia

The throwaway merchants at Bic thought: I know we've been very successfully making disposable pens, lighters and razors, so why not make disposable underwear for women?

Some examples of spectacular failures caused by building the wrong "it":

Many other examples could be given.

What is especially tragic is when huge investments are made up front, then -- when the product idea is clearly failing -- instead of calling it quits, still more cash is pumped in. Until bankruptcy ensues. How to avoid this sort of tragedy?

Pretotyping

IBM 30 years ago did something very clever. They thought that speech to text would be the next big thing because managers could not type. Their market research told them that if they built a speech to text translator, people would buy it. As a small experiment, they got people who said "we will pay $10,000 if you build it" and brought them to IBM. They put them in a room with a microphone and a screen, so they thought they had a speech to text translator when in fact they had a super typist in a hidden room! It sounded like a good idea. However, after using it: at the end of the day my throat is sore; and I cannot dictate confidential memos in an open office. After the test, folks said "I'm sorry, I hope you didn't build too many of them, because we don't want them".

The person who built the Palm Pilot, Jeff Hawkins, had an innovator's nightmare, lost millions. This time, instead of whipping my investors into a frenzy, let's test the idea with a little wood block and a tooth pick; and he went around for two weeks pretending he had built this Palm Pilot. After two weeks of this pretending, he said "You know, if this wasn't just a piece of wood I would actually use it". It had much less functionality than the Newton, yet was much more successful.

-- from The Pretotyping Manifesto by Alberto Savoia

Pretotyping: Validating the market appeal and actual usage of a potential new product by simulating its core experience with the smallest possible investment of time and money.

-- from The Pretotyping Manifesto by Alberto Savoia

The Pretotyping Manifesto:

• innovators beat ideas
• pretotypes beat productypes
• data beats opinions
• doing beats talking
• simple beats complex
• now beats later
• commitment beats committees

False Positives and False Negatives

Remember Webvan, the originators of the idea of groceries ordered online, then delivered to your door?
Conceived during the first internet boom of the late 1990's, the idea behind Webvan was an instant success in Thoughtland. Everyone gave it a thumbs-up, and why not? It sounded simple, convenient; it had that why-didn't-I-think-of-that, forehead-smacking ring of genius.

Who can ignore Twitter? But when you first heard of the service, what was your reaction? Some may have thought it an intriguing experiment in real-time micro-broadcasting (though what evidence there was that this was a gap for people is unclear to me). But surely few intuited that it would ultimately power the democratic revolutions of the Arab Spring. The elevator pitch for Twitter has that terrier-twisting-its-head-to-comprehend, temple-scratching ring of insanity.

-- from Pretotyping@Work Invent Like a Startup, Invest Like a Grownup by Jeremy Clark

With Webvan, people who had been asked a hypothetical "would you use it?" question turned out to be far less enthusiastic when faced with a concrete "will you use it?" question. By the way, seeking to learn from failure, Amazon has recently hired several of the original Webvan developers to launch a new Amazon Fresh grocery business.

It seems that False Positives are usually based on the opinions of acknowledged (and over-confident) experts. We pretotype because data beats opinions.

False Negatives, such as Twitter, are much rarer. Which leads us to Clark's second law of failure: too few crazy-sounding ideas get tried.

To avoid both False Positive and False Negative outcomes, revealed-preference market testing of reasonable proxies for the final product have to be achievable at much lower investments of time and money.

-- from Pretotyping@Work Invent Like a Startup, Invest Like a Grownup by Jeremy Clark

Some Pretotyping Techniques

• Fake Door. Advertise a new product or feature, then track the response rate to see who would be interested.
• Pinocchio. As used by Jeff Hawkins with his wooden model of the Palm Pilot.
• Mechanical Turk. As used above by IBM to test customer reaction to speech-to-text translation "software".
• One Night Stand. A fairly complete service experience is provided, minus the expensive underlying infrastructure required by a permanent solution.
• Impersonator. A new wrapper is put on an existing product in order to impersonate a new one.
• MVP. A Minimum Viable Product (MVP), a core part of Lean startup, is a working prototype put in the customer's hands. It is stripped down to the bare minimum required to perform a fair test.

A complementary approach to pretotyping that has made a big splash recently is Lean startup, the subject of the next installment in this series.

Perl Monks References

External References

Time for an application portfolio
8 direct replies — Read more / Contribute
by talexb
on Jul 27, 2015 at 12:04

I have been tinkering with a few tools lately, and now want to put up a portfolio of some web applications that I am working on. I have an account on pair Networks (they also host this site), so I set up local::lib and went ahead and tried to install Mojolicious::Lite, since that's the platform I'm working on these days. No dice -- Mojolicious::Lite requires 5.10, and pair only has 5.8.9. I checked with the other provider I use, and they have 5.8.8.

So the two options I can see are a) install an up-to-date Perl on one of those accounts, or b) have these web applications run on my home machine (perhaps using http://www.easydns.com to provide consistent name resolution -- not sure if this is still available). I could go find another web provider, but that's additional expense, and not really my best option right now. Feedback welcome!

Alex / talexb / Toronto

Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.

RFC: newscript.pl, my very first script!
6 direct replies — Read more / Contribute
by Darfoune
on Jul 24, 2015 at 15:40

Hello monks,

This is my very first post on this site!
I wrote my first program, newscript.pl, a while ago, in order to save some repetitive typing, and I really want your opinions, critiques, suggestions, etc. Its job is very simple: create an empty script. At first it was written in bash, with only bash as a supported language; now written in Perl, it includes Perl and Bash, and I have started slowly working to include C as well. It's not very portable and surely not very efficient, but I'm using it every day while reading through Intermediate Perl, Advanced Bash Scripting and Learning C the Hard Way.

The script first asks the user for the name of the new script, then it asks for the language you're going to use. It will then print the shebang line as well as wanted modules (if you asked for a Perl script) and save it to a file named yourname.test. It then fires up emacs -nw with your new script so you are ready to input code right away.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Basename;

    ############################
    #
    # Name : newscript
    # Usage: Makes ready to use script templates for
    # Bash and Perl only at the moment. It includes
    # the shebang line for both. Perl templates
    # include some useful pragmas and the option to
    # include the required modules on the command line.
    #
    ############################

    # declare some required vars
    my ($name, $language);
    my @modules  = ();
    my $fullname = $0;
    my $progname = basename($fullname);

    ## main program:
    print "Name of the new script : ";
    chomp ($name = <STDIN>);

    print "Language of $name script: ";
    chomp ($language = <STDIN>);

    # If language is Bash, make a Bash script
    if ($language =~ /bash/i) {
        $name = "$name.test";
        print "\nMaking a Bash script: $name\n";
        _makebash();

    # If language is Perl, make a Perl script
    } elsif ($language =~ /perl/i) {
        $name = "$name.pl.test";
        print "\nMaking a Perl script: $name\n";
        print "\nAdd modules? ex: File::Basename;\n(use strict and use warnings are turned on by default).\n";
        print "[yes/no]: ";
        # check if user wants modules
        chomp (my $addmodule = <STDIN>);
        if ($addmodule =~ /yes/i) {
            print "\nThis script does NOT add a ';' for you!\nSay 'done' when you're done..\nModules: ";
            while (<STDIN>) {
                last if ($_ =~ /done(;)?/i);
                push @modules, $_;
            }
            _makeperl();
        } elsif ($addmodule =~ /no/i) {
            _makeperl();
        } else {
            print "I assume no.\n";
            _makeperl();
        }

    # If language is C, make a C program
    } elsif ($language =~ /c/i) {
        $name = "$name.test.c";
        print "\nMaking a C program: $name\n";
        print "\nThis is the first version with C included, no more options yet.\n";
        print "Only '#include <stdio.h>' added at this time.\n\n";
        _makec();
    } else {
        print "This might help you:\n";
        _usage();
    }

    # Make a bash script
    sub _makebash {
        if ($language eq 'bash') {
            open (NEWSCRIPT, '>', $name);
            print NEWSCRIPT "#!/bin/bash\n\n";
            close NEWSCRIPT;
            chmod 0700, "$name";
            exec ("emacs -nw +3 $name");
        }
    }

    # Make a perl script
    sub _makeperl {
        open (NEWSCRIPT, '>', $name);
        print NEWSCRIPT "#!/usr/bin/perl\n\nuse warnings;\nuse strict;\n";
        if (defined($modules[0])) {   # if modules were given, include them in the template
            for my $mods (@modules) {
                print NEWSCRIPT "use $mods";
            }
        }
        print NEWSCRIPT "\n";
        close NEWSCRIPT;
        print "\n";
        chmod 0700, "$name";
        exec ("emacs -nw +50 $name");
    }

    # Make a C program
    sub _makec {
        open (NEWPROG, '>', $name);
        print NEWPROG "#include <stdio.h>\n\n";
        close NEWPROG;
        print "\n";
        chmod 0700, "$name";
        exec ("emacs -nw +10 $name");
    }

    # Prints the usage message.
    sub _usage {
        print<<EOF;

    Usage: $progname [no options yet]

    Creates ready to use script templates. The script will first ask you
    for the name of your program, then the language in which you want it
    written. If your chosen language is supported, it will make an empty
    script, with your name and 'test' appended to it. The script then
    makes an exec call to emacs -nw with your new file. (not very
    portable yet ..)

    note: If your chosen language is Perl, the script will ask you if
    you wish to import more modules. If you do, input them followed by a
    ';' and input 'done' when finished.

    bash: #!/bin/bash

    perl: #!/usr/bin/perl
          use warnings;
          use strict;
          use [yourmods];

    C:    #include <stdio.h>

    EOF
    }

IBM Cloud Challenge
3 direct replies — Read more / Contribute
by BrowserUk
on Jul 21, 2015 at 18:07

I just read about an IBM programming challenge to try and entice developers to IBM's Bluemix Cloud development environment. (Don't bother if you're outside the UK; or if you want to use Perl (it ain't supported :(); or if stupid sign-up processes that don't work annoy you; or ...)

What struck me was that the three programming tasks are, at least notionally, so trivial. It took me less than 5 minutes to write (working, but probably not best) solutions to all three. (Whether they would pass their test criteria I guess we'll probably never know.)

I was also struck by this part of the description: that you can put together a programme that can run within a time limit or on limited resources rather than just lashing together a hideous brute-force monstrosity. And that you can actually read the questions properly in the first place (a useful start, but one that's often forgotten).

I think it would be interesting to see how the best Perlish solutions we can come up with compare with those in the other languages that get entered in the competition, when and if they are actually made public. So have at them. (Don't forget to add <spoiler></spoiler> tags around your attempts.)
I'd post the questions here, but I'm not sure it wouldn't be a problem copyright-wise. I'll post my solutions here in a few days.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
I'm with torvalds on this Agile (and TDD) debunked I told'em LLVM was the way to go. But did they listen!

Beyond Agile: Subsidiarity as a Team and Software Design Principle
3 direct replies — Read more / Contribute
by einhverfr
on Jul 20, 2015 at 21:24

This is a collection of thoughts I have been slowly putting together based on experience, watching the development (and often bad implementation of) agile coding practices. I am sure it will be a little controversial, in the sense that some people may not see agile as something to move beyond, and some may see my proposals as being agile.

## What's wrong with the Waterfall?

I think any discussion of agile programming methodologies has to start with an understanding of what problems agile was intended to solve, and this has to start with the waterfall model of development, where software has a slow, deliberate life cycle and all design decisions are supposed to be nailed down before the code is started. Basically, the waterfall approach is intended to apply civil engineering practices to software, and while it can work with very experienced teams in some limited areas, it runs into a few specific problems.

The first is that while civil engineering projects tend to have well understood and articulated technical requirements, software projects often don't.
And while the cost of failure in dollars and lives for a civil engineering disaster can be high, with software it is usually only money (this does, however, imply that for some things a waterfall approach is the correct one, a principle I have rarely seen argued against by experienced agile developers).

The second is that business software requirements often shift over time in ways that bridges, skyscrapers, etc. don't. You can't start building a 30 floor skyscraper and then have the requirements change so that it must be at least 100 floors high. Yet we routinely see this sort of thing done in the software world.

Agile programming methodologies arose to address these problems. They are bounded concerns, not applicable to many kinds of software (for example, software regulating dosage of radiotherapy would be more like a civil engineering project than like a business process tool), but the concerns do apply to a large portion of the software industry.

## How Agile is Misapplied

Many times when companies try to implement agile programming, they run into a specific set of problems. These include unstructured code and unstructured teams. This is because too many people see agile methodologies as devaluing design and responsibility. Tests are expected to be documentation, documentation is often devalued or unmaintained, and so forth. Many experienced agile developers I have met in fact suggest that design is king, that it needs to be done right and in place, and so forth, but agile methodologies can be taken by management as devaluing documentation in favor of tests, and devaluing design in favor of functionality.

There are many areas of any piece of software where stability is good, where the pace of development should be slow, and where requirements are well understood and really shouldn't be subject to change. These areas cut against the traditional agile concerns, and I think they require a way of thinking about the problems from outside either the waterfall or agile methodologies.
## Subsidiarity as a Team Design Principle

Subsidiarity is a political principle articulated a bit over a hundred years ago in a Papal encyclical. The idea is fairly well steeped in history and applicable well beyond the confines of Catholicism. The basic idea is that people have a right to accomplish things, and that for a larger group to do what a smaller group can therefore constitutes a sort of moral theft. Micromanagement is therefore evil if subsidiarity is good, but it also means that teams should be as small as possible, but no smaller.

In terms of team design, small teams are preferable to big teams, and the key question is what a given small team can reasonably accomplish. Larger groupings of small teams can then coordinate on interfaces, etc., and sound, stable technological design can come out of this interoperation. The small team is otherwise tasked with doing everything -- design, testing, documentation. Design and testing are as integrated with software development as they are in agile, but as important as they are in the waterfall approach.

This sort of organization is as far from the old waterfall as agile is, but it shares a number of characteristics with both. Design is emphasized, as is documentation (because documentation is what coordinates the teams). Stability in areas that need it is valued, but in other areas it is not. Stable contracts develop where these are important, and both together and individually, accomplishments are attained.

## Subsidiarity as a Technical Design Principle

Subsidiarity in team design has to follow what folks can accomplish, but this also means that teams will follow technological lines as well. Team responsibility has to be well aligned with technological responsibility. I.e., a team is responsible for components, and components are responsible for functionality.
The teams can be thought of as providing distinct pieces of software for internal clients, taking on responsibility to do it right and to provide something maintainable down the road. Once a piece of software is good and stable, they can continue to maintain it while moving on to another piece. Teams that manage well-defined, stable technical components can then end up maintaining a much larger number of such components than teams which manage pieces that must evolve with the business. Those latter teams can move faster because they have stability in important areas of their prerequisites. But the goal is largely autonomous small teams with encapsulated responsibilities, producing software that follows that process.

Advanced techniques with regex quantifiers
5 direct replies — Read more / Contribute
by smls
on Jul 19, 2015 at 05:29

Lately I've been experimenting again with using Perl regexes more like grammars, i.e. parsing inputs via a single big regex that involves lots of branching, instead of the traditional approach of parsing inputs via imperative "spaghetti code" that sequentially matches lots of small regexes. However, I quickly ran into two limitations relating to regex quantifiers (* + {}). Here's a write-up of the solutions/workarounds I found, both for my own benefit (so I can refer back to them), and in case others might find it interesting. Also, I'd love to hear the opinions of other monks on which of these techniques should be used in real code, and whether it would be worth adding new Perl 5 core features to make them obsolete.

TOC:

RFC: Net::SNTP::Server v1
2 direct replies — Read more / Contribute
by thanos1983
on Jul 17, 2015 at 10:45

Dear Monks,

Once again I need your expert opinion, as I am about to upload my second module, related to Net::SNTP::Client. The module serves the purpose of a simple SNTP server, able to reply to client requests based on the RFC 4330 message format.
In order to test the code I will provide both the client and server scripts, so you can easily test and observe bugs or possible improvements based on your experience.

Update: changing value(s) from "0" to "1" for -RFC4330 => "1" and -clearScreen => "1" in the client code.
Update 2: Modifying POD based on new findings, also the function verify_port, and lastly adding a new error message/restriction in checking $moduleInput{-port}.
Update 3: Modifying server module based on comments from Monk::Thomas.
Update 4: Modifying server module based on comments from Anonymous Monk.

Client script:

Server script:

Server module:

Thank you all in advance for your time and effort reviewing my work.

Seeking for Perl wisdom...on the process of learning...not there...yet!

Recamán's sequence and memory usage
3 direct replies — Read more / Contribute
by Athanasius
on Jul 13, 2015 at 04:41

Esteemed Monks,

I was looking at The On-Line Encyclopedia of Integer Sequences (OEIS) (you know, as one does), and on the Welcome page I found a list of Some Famous Sequences, of which the first is Recamán's sequence, defined as follows:

    R(0) = 0;
    for n > 0, R(n) = R(n-1) - n if positive and not already in the sequence, otherwise
               R(n) = R(n-1) + n.

What makes this sequence interesting is N. J. A. Sloane's conjecture that every number eventually appears.

Coding the sequence is simplicity itself; the challenge is to test Sloane's conjecture by keeping track of the numbers that have not yet appeared in the series. My initial, naïve approach was to use a sieve, à la Eratosthenes. But this turned out to be far too memory-hungry: for values of MAX of the order of twenty million, RAM usage on my 3GB system approaches 100%, thrashing sets in, and the script (along with the rest of Windows) grinds to a shuddering halt.

Surely, I thought, there must be a memory-efficient way to represent a sieve? And of course there is, and of course it was already implemented on CPAN.
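Generating the terms really is simple. For illustration, here is a minimal hash-based generator (my own sketch, not the author's sieve code; the helper name recaman is made up):

```perl
use strict;
use warnings;

# Generate the first $count terms of Recaman's sequence, using a hash
# to remember which values have already appeared.
sub recaman {
    my ($count) = @_;
    my @seq  = (0);
    my %seen = (0 => 1);
    for my $n (1 .. $count - 1) {
        my $r = $seq[-1] - $n;
        # R(n-1) - n must be positive and unseen; otherwise add n
        $r = $seq[-1] + $n if $r <= 0 || $seen{$r};
        push @seq, $r;
        $seen{$r} = 1;
    }
    return @seq;
}

print join(', ', recaman(10)), "\n";   # 0, 1, 3, 6, 2, 7, 13, 20, 12, 21
```

The %seen hash is fast but is exactly the memory hog described above: every term ever produced stays in it forever.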
A little searching led to the Set::IntSpan module, which stores runs of consecutive integers as spans, allowing large (even infinite) collections of integers to be represented very economically.

Calculation of successive terms in the Recamán sequence is noticeably slower using Set::IntSpan for lookup than it is using a hash. But, as the adage says, it's better to be late than be dead on time. (This was the slogan of an Australian safe driving ad campaign some years ago.)

For the record: I also looked at Set::IntSpan::Fast and Set::IntSpan::Fast::XS. The latter failed to install on my system, and the former actually ran slower than Set::IntSpan for this use-case.

Turns out that Set::IntSpan not only solves the memory problem, it also makes it possible to dispense with an upper bound for the sieve. How, then, to display progressive results? Well, the OEIS has a couple of additional series related to Recamán's:

• A064228: values of R(n) that take a record number of steps to appear: 1, 2, 4, 19, ...
• A064227: the values of n corresponding to the values in A064228: 1, 4, 131, 99734, ...

So I recast the script to output successive values of these two series:

    14:20 >perl recaman.pl
    1 <-- 1
    2 <-- 4
    4 <-- 131
    19 <-- 99734
    ...

Here is the new script:

    use strict;
    use warnings;

    use sigtrap handler => \&int_handler,   'INT',
                handler => \&break_handler, 'BREAK';
    use Set::IntSpan;
    use Time::HiRes qw(gettimeofday tv_interval);

    $|       = 1;
    my $t0      = [gettimeofday];
    my $min0    = 1;
    my $n       = 0;
    my $r0      = 0;
    my $missing = Set::IntSpan->new( '1-)' );

    print "$min0 <-- ";

    while (++$n) {
        my $r = $r0 - $n;
        $r = $r0 + $n if $r < 0 || !$missing->member($r);

        $missing->remove($r);

        if ((my $min1 = $missing->min) > $min0) {
            print "$n\n$min1 <-- ";
            $min0 = $min1;
        }

        $r0 = $r;
    }

    sub int_handler {
        printf "\nn = %d, elapsed time: %.1fs\n", $n, tv_interval($t0);
    }

    sub break_handler {
        int_handler();
        exit 0;
    }

This script was developed under Windows 8.1, 64-bit, using Strawberry Perl:

    14:20 >perl -v

    This is perl 5, version 22, subversion 0 (v5.22.0) built for MSWin32-x64-multi-thread

The two signal handlers allow the script to be interrupted as follows:

• Control-C causes the script to display the current value of $n and the total running time of the script so far.
• Control-Break causes the script to display the same information and then exit.

My takeaways from this meditation?

First, we all know that micro-optimisation is pointless until you have first selected the best algorithm(s). But optimising an algorithm may actually consist in optimising its underlying data structures. Obvious? Yes, but still worth a reminder now and then.

Second, CPAN is awesome! But you knew that already. :-)

Cheers,

 Athanasius <°(((>< contra mundum Iustus alius egestas vitae, eros Piratica,

tied hash for data munging
3 direct replies — Read more / Contribute
by shmem
on Jul 11, 2015 at 08:21

This meditation is about a tied hash package, and how it came into existence. I am still meditating on whether this is too obscure, or whether its goal is better achieved using some other technique. Is it worth being uploaded to CPAN as yet another strange perl delirium? Is the name OK? Any suggestions, reviews, or critiques are welcome. Thanks for your time.

### Itch

Over 100 poorly performing scripts written in some BASIC dialect for exactly the same purpose (read source records, transform them, write target records), sporting hardcoded parameters and different output assembling code, proliferating with each new customer (copy over, twiddle, tweak).

### Scratch

Perl to the rescue to do the data gathering and munging, and write a unified import CSV to be fed into that dratted BASIC script - one for all. Parameters and data-transforming procedures should be kept separate, in a format editable by non-perlers. I chose INI-file style, which fitted both that BASIC dialect and perl:

SCALAR=Some value
LIST=Foo,Bar,Baz
HASH=Foo:Bar,Baz:Quux
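A minimal reader for this INI-ish format might look like the following sketch (my own assumption about the parsing rules; shmem's actual reader is not shown in the post). A value with colons becomes a hash ref, one with commas an array ref, anything else a plain scalar:

```perl
use strict;
use warnings;

# Hypothetical sketch of a reader for the INI-style values above.
my $ini = <<'END';
SCALAR=Some value
LIST=Foo,Bar,Baz
HASH=Foo:Bar,Baz:Quux
END

my %c;
for my $line (grep { length } split /\n/, $ini) {
    my ($key, $val) = split /=/, $line, 2;
    if ($val =~ /:/) {
        # colon-separated pairs, comma-separated entries -> hash ref
        $c{$key} = { map { split /:/, $_, 2 } split /,/, $val };
    }
    elsif ($val =~ /,/) {
        # comma-separated entries -> array ref
        $c{$key} = [ split /,/, $val ];
    }
    else {
        $c{$key} = $val;
    }
}
```

The resulting %c is the INI-content hash referred to as $c later in the post.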

Straightforward. What about the data-transforming rules? Since these are concatenations of values from the input record - conveniently present as a hash - and the output of some functions munging those values, they are something that could easily be transformed into subroutines:

DOMAIN=@example.com
USER=sAMAccountName.DOMAIN
PASS=md5sum(PRE.sAMAccountName.POST,15)
PRE=a874f4u
POST=ea748tyoal
MAIL=join(DOT,givenName,sn).DOMAIN

md5sum is a function using Digest::MD5::md5_hex.
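The md5sum helper itself is not shown in the post; a plausible minimal version (an assumption on my part: hash the string, keep the first $len hex characters) could be:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical reconstruction of the md5sum() helper: hash the input
# and truncate the hex digest to the requested length.
sub md5sum {
    my ($str, $len) = @_;
    return substr md5_hex($str), 0, $len;
}

# e.g. a 15-character pseudo-password from PRE . name . POST
print md5sum('a874f4u' . 'jdoe' . 'ea748tyoal', 15), "\n";
```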

With a subroutine-generating subroutine using a regex, the above values are converted into the following hash:

%h = (
    DOMAIN => '@example.com',
    USER   => sub { $r->{sAMAccountName} . $c->{DOMAIN} },
    PASS   => sub { md5sum($c->{PRE} . $r->{sAMAccountName} . $c->{POST}, 15) },
    MAIL   => sub { join('.', $r->{givenName}, $r->{sn}) . $c->{DOMAIN} },
);

where $r is the current record, and $c is the hash representing the INI file's content.

So far, so good. Retrieving a hash value that holds a subroutine is done with

$out{$key} = $h{$key}->();

but that dies if the value slot holds a plain scalar. Yes, I could iterate over the keys using ref or such, but I would rather just say

%out = %h;

and have a magic hash %h which encapsulates all that logic and "knows" what to deliver.
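Such a magic hash can be sketched with perl's tie mechanism (a minimal illustration under my own assumptions; the package name is hypothetical and this is not shmem's actual module). FETCH returns plain scalars untouched and calls coderefs on the fly:

```perl
package Tie::CodeHash;    # hypothetical name, not the author's module
use strict;
use warnings;

sub TIEHASH { my ($class, %data) = @_; bless { %data }, $class }

sub FETCH {
    my ($self, $key) = @_;
    my $v = $self->{$key};
    # Execute coderefs transparently; pass plain values through
    return ref $v eq 'CODE' ? $v->() : $v;
}

sub STORE    { $_[0]->{ $_[1] } = $_[2] }
sub EXISTS   { exists $_[0]->{ $_[1] } }
sub DELETE   { delete $_[0]->{ $_[1] } }
sub FIRSTKEY { my $reset = scalar keys %{ $_[0] }; each %{ $_[0] } }
sub NEXTKEY  { each %{ $_[0] } }

package main;

tie my %h, 'Tie::CodeHash',
    DOMAIN => '@example.com',
    USER   => sub { 'jdoe' . '@example.com' };

my %out = %h;    # coderefs are executed during the copy
```

With this, %out = %h "just works": the copy triggers FETCH for each key, so scalar slots and code slots both come out as plain values.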

### Relief

A tied hash. Update: which is also a subroutine factory.

<update>
...due to BrowserUk's immediate reaction below: this is overkill for just executing CODE in the value slots of a hash, of course. Well, the itch was the starting point. The code generation and closure bits (see EXAMPLE in the pod) show what it might be useful for: currying, building dispatch tables, ... - I have to play more with this yet.
</update>

perl -le'print map{pack c,($-++?1:13)+ord}split//,ESEL'

Tutorial RFC: Guide to Perl references, Part 1
5 direct replies — Read more / Contribute
by stevieb
on Jul 07, 2015 at 16:45

NOTE: I've been considering translating this five-part series I wrote a few years ago to PerlMonks, and I thought I'd go for it to see what others thought. Parts 2-5 are still linked to the blog to give context to the entire series. If people think this should be made into a Tutorial, I'll clean up and translate the rest of the docs and link them properly. Thoughts?

BEGIN TUTORIAL...

Understanding references and their subtleties in Perl is one of the more difficult concepts to fully wrap one's head around. However, once blossoming developers fully understand them, they find a whole new level of capability and power to exploit and explore. I often see newer programmers struggle with the concept of references on the Perl help sites I frequent. Some still have a ways to go, but many are at the stage where perhaps one more tutorial may push them over the edge and give them that 'Ahhhh' moment of clarity. My moment of clarity came when I read Randal Schwartz's "Learning Perl Objects, References & Modules" book for something like the 8th time. Even once the concept of references is understood, the syntax and use cases can still be confusing for quite some time, especially in Perl, because There Is More Than One Way To Do It.

This tutorial is the first in a five-part series. This part will focus on the basics, preparing you for more complex uses in the following four parts. I've created a cheat sheet that summarizes what you'll learn in this document.
• Part 1 - The basics (this document)
• Part 2 - References as subroutine parameters
• Part 3 - Nested data structures
• Part 4 - Code references
• Part 5 - Concepts put to use

I will stick with a single consistent syntax throughout the series, and will refrain from using one-line shortcuts and other simplification techniques in loops and other structures, in hopes of keeping any confusion to a minimum. Part one assumes that you have a very good understanding of the Perl variable types, when they are needed, and how they are used. Some exposure to references may also prove helpful, but shouldn't be required.

THE BASICS

A reference in Perl is nothing more than a scalar variable that, instead of containing a usable value, 'points' to a different variable. When you perform an action on a reference, you are actually performing the action on the variable that the reference points to. A Perl reference is similar to a shortcut to a file or program on your computer. When you double-click the shortcut, the shortcut doesn't open; it's the file the shortcut points to that does.

We'll start with arrays, and I'll get right into the code. We'll define an array as normal, and then print out its contents.

my @array = ( 1, 2, 3 );

for my $elem ( @array ){
    say $elem;
}

Prepending the array with a backslash is how we take a reference to the array and assign the reference to a scalar. The scalar $aref is now a reference that points to @array.

my $aref = \@array;

At this point, if you tried to print out the contents of $aref, you would get the location of the array being pointed to. You know you have a reference if you ever try to print a scalar and you get output like the following:

ARRAY(0x9bfa8c8)
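A quick way to check whether a scalar holds a reference, and of what type, is the built-in ref function (not covered further in this part, but handy while experimenting). It returns the reference type as a string, or an empty string for a non-reference:

```perl
use strict;
use warnings;
use feature 'say';

my @array = ( 1, 2, 3 );
my %hash  = ( a => 1 );

my $aref = \@array;
my $href = \%hash;

say ref $aref;    # prints "ARRAY"
say ref $href;    # prints "HASH"
say ref 42        # prints nothing: 42 is not a reference
    || 'not a reference';
```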

Before we can use the array the reference points to, we must dereference the reference. To gain access to the array and use it as normal, we use the array dereference operator @{}. Put the array reference inside of the dereference braces and we can use the reference just as if it was the array itself:

for my $elem ( @{ $aref } ){
    say $elem;
}

The standard way of assigning an individual array element to a scalar:

my $x = $array[0];

To access individual elements of the array through the reference, we use a different dereference operator:

my $y = $aref->[1];

Assign a string to the second element of the array in traditional fashion:

$array[1] = "assigning to array element 2";

To do the same thing through an array reference, we dereference it the same way we did when we were taking an element from the array through the reference:

$aref->[1] = "assigning to array element 2";

You just learned how to take a reference to an array (by prepending the array with a backslash), how to dereference the entire array reference by inserting the reference within the dereference block @{}, and how to dereference individual elements of the array through the reference with the -> dereference operator. That is all there is to it. Hashes are extremely similar. Let's look at them now.

Create and initialize a normal hash, and iterate over its contents:

my %hash = ( a => 1, b => 2, c => 3 );

while ( my ( $key, $value ) = each %hash ){
    say "key: $key, value: $value";
}

Take a reference to the hash, and assign it to a scalar variable:

my $href = \%hash;

Now we'll iterate over the hash through the reference. To access the hash, we must dereference it just like we did the array reference above. The dereference operator for a hash reference is %{}. Again, just wrap the reference within its dereferencing block:

while ( my ( $key, $value ) = each %{ $href } ){
    say "key: $key, value: $value";
}

Access an individual hash value:

my $x = $hash{ a };

Access an individual hash value through the reference. The dereference operator for accessing individual elements of a hash through a reference is the same one we used for an array (->).

my $y = $href->{ a };

Assign a value to hash key 'a':

$hash{ a } = "assigning to hash key a";

Assign a value to hash key 'a' through the reference:

$href->{ a } = "assigning to hash key a";

That's essentially the basics of taking a reference to something, and then dereferencing the reference to access the data it points to. When we operate on a reference, we are essentially operating on the item being pointed to directly. Here is an example that shows, in action, how operating directly on the item has the same effect as operating on the item through the reference.

my @b = ( 1, 2, 3 );
my $aref = \@b;

# assign a new value to $b[0] through the reference$aref->[0] = 99;

# print the array

for my $elem ( @b ){
    say $elem;
}

Output:

99
2
3

As you can see, the following two lines are equivalent:

$b[0] = 99;
$aref->[0] = 99;

CHEAT SHEET

Here's a little cheat sheet for review before we move on to the next part in the series.

my @a = ( 1, 2, 3 );
my %h = ( a => 1, b => 2, c => 3 );

# take a reference to the array
my $aref = \@a;

# take a reference to the hash
my $href = \%h;

# access the entire array through its reference
my $elem_count = scalar @{$aref };

# access the entire hash through its reference
my $keys_count = keys %{$href };

# get a single element through the array reference
my $element = $aref->[0];

# get a single value through the hash reference
my $value = $href->{ a };

# assign to a single array element through its reference
$aref->[0] = 1;

# assign a value to a single hash key through its ref
$href->{ a } = 1;

This concludes Part 1 of our Guide to Perl references. My goal was not to compete with all the other reference guides available, but instead to complement them, with the hope that perhaps I may have said something in such a way that it helps further even one person's understanding. Next episode, we'll learn about using references as subroutine parameters.

The problem of documenting complex modules.
11 direct replies — Read more / Contribute
by BrowserUk
on Jul 05, 2015 at 04:41

This is a meditation; but I also hope that it might start a discussion that will come up with answers to what I see as an ongoing and prevalent problem.

This has been triggered at this time by my experience of trying to wrap my brain around a particular complex module; but I don't want to get into discussion particular to that module, so I won't be naming it.

Suffice to say that CPAN is replete with modules that are technically brilliant and very powerful solutions to the problems they address; and that deserve far wider usage than they get.

In many cases the problem is not that they lack documentation -- often quite the opposite -- but more that they don't have a simple in: a clearly defined and obvious starting point on which the new user can build.

An example of (IMO) good documentation is Parallel::ForkManager. Its synopsis (I've tweaked it slightly to remove a piece of unnecessary fluff):

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new($MAX_PROCESSES);

foreach my $data (@all_data) {
    # Forks and returns the pid for the child:
    my $pid = $pm->start and next;

    ... do some work with $data in the child process ...

    $pm->finish; # Terminates the child process
}

is sufficient to allow almost anyone needing to use it, for almost any purpose, to put together a reasonable working prototype in a dozen lines of code without reading further into the documentation. It allows the programmer to get started and move forward almost immediately on solving his problem -- which isn't "How to use P::FM" -- and only refer back to and utilise the more sophisticated elements of P::FM as and when he encounters the limitations of that simple starting point. As such, the module is successful in hiding the nitty-gritty details of using fork correctly, whilst imposing the minimum of either up-front learning curve or infrastructural boiler-plate upon the programmer, who has other more important (to him) things on his mind.

Contrast that with something like POE, which requires a month of reading through the synopses of the 800+ modules in the POE::* namespace, and then another month of planning, before the new user could put together his first line of code. As powerful as that module -- suite of modules; dynasty of modules -- is, unless you have the author's help, and lots of time, getting started is an extremely daunting process. In that respect (alone perhaps), POE fails to enable a 'simple in'. And before anyone says that it is unfair to compare those two modules -- which may be true -- the purpose was to pick extremes to make a point; not to promote or denigrate either.

Another module that I know I should have made much more use of in the type of code I frequently find myself writing is PDL. I've tried at least a dozen times to use PDL as a part of one of my programs; and (almost) every time I've abandoned the attempt before ever writing a single line of PDL, because I get frustrated by the total lack of a clear entry point into the surfeit of documentation.
There's the FAQ, and the Core; and the Index; and the QuickStart; and the Doc; and the Basic; and the Lite; and the Course; and the Philosophy; and the pdldoc; and the Tips; and ... I'm outta here. I'm trying to write my program, which does a little math on some biggish datasets that would benefit from being vectored, but life's too short...

Again; the underlying code is brilliant (I am assured), and it isn't a case of a lack of documentation; just a mindset that says: "this is PDL in all its glory, power and nuance. Bathe yourself in its wonderfulness and wallow in its depth". Oh, and then when you've immersed yourself in its glory, understood its philosophy, and acclimated to its nuance, then you can get back to working out how to use it to solve your problem. And that's a real shame; and a waste.

I'm not sure what the solution is. I do know that for the modules I use most -- List::Util, Data::Dump, threads etc. -- I have rarely ever had to look at the documentation; their functionality has (for me) become an almost invisible extension of Perl itself, and only the occasional (perhaps you forgot to load "sum"?) reminds me that they aren't. Of course, what they do is essentially pretty simple; but that in itself is perhaps a clue.

I do know that (for me) the single most important thing in encouraging my use of a module is being able to C&P the synopsis into my existing program, tweak the variable names, and have it do something useful immediately. In part, that comes down to a well-designed API; in part, to well-chosen defaults; and in part to a well-chosen, well-written synopsis that addresses the common case, with variable names and structure that make it obvious how to adapt that synopsis for the common case. Once I have something that compiles and runs -- even if it doesn't do exactly what I need it to do; or even what I thought it would do from first reading -- it gives me a starting point and something to build on. And that encourages me to persist.
To read the documentation on an as-I-need-to basis, to solve particular problems as I encounter them. Over a decade ago, I posted My number 1 tip for developers.; and this is the other side of that same philosophy. Start simple and build.

And that, I think, has to be the correct approach to documenting complex modules. They need to:

1. Offer a single, obvious, starting point. The in.
2. That needs to be very light on history, philosophy, jargon, technical and social commentary and background. And choice.
3. It needs to offer a single, simple, well-chosen, starting point, that requires minimal reading to adapt to the user's code, for the common case.
4. It then needs to offer them a quick, clear, simple path to solving their problem.

What it must not do:

• It mustn't present them with 'a bloody great big list of entrypoints/methods'.
• It mustn't offer them a myriad of choices and configuration options.
• It mustn't take them on a deep immersion in the details of either algorithms or implementation.
• It mustn't present them with either an "Ain't this amazing" or an "Ain't I clever" advert.
• It mustn't waste their time with details of your personal preferences, prejudices, philosophies and theologies.

If you want programmers to use your modules, you need to tell them what (the minimum) they *NEED TO KNOW* to get started. And then give a clear index to the variations, configurations and extensions to that basic starting point. Achieve that -- give them their 'in', with the minimum of words, fuss or choice -- and they'll come back for all the rest as they need it.

This is ill-thought-through and incomplete, so what (beyond risking offending half the authors on CPAN) am I trying to achieve with this meditation? I'd like to hear whether you agree with me, or how you differ. What you look for in module documentation. Examples that you find particularly good; or bad.
It'd be nice to be able to derive from the thread a set of consensus guidelines for documenting moderate to complex modules -- that almost certainly won't happen -- but if we managed to get a good cross-section of opinions on what makes for good and bad documentation, and a variety of opinions on the right way to go about it, it might provide a starting point for people needing to do this in the future.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". In the absence of evidence, opinion is indistinguishable from prejudice.
I'm with torvalds on this Agile (and TDD) debunked I told'em LLVM was the way to go. But did they listen!

Writing multiple Excel::Writer::XLSX worksheets in parallel (3rd and final attempts)
4 direct replies — Read more / Contribute
by marioroy
on Jul 04, 2015 at 23:16

July 22, 2015. The example was updated to work with MCE in trunk.

My 1st and 2nd attempts got me warmed up, and I thought faster was possible. The following demo is my 3rd attempt; it writes 1 million cells combined in less than 6 seconds from start to finish, and 57 seconds for 10 million cells. Running serially takes 15 and 141 seconds for 1 and 10 million cells respectively. (Processors sustain turbo boost only for some time; thus, serial code is likely to run at a faster GHz.)

 for ( 1 .. 111_111 ) { ... }      # 3 * 3, 1 million
 for ( 1 .. 1_111_111 ) { ... }    # 3 * 3, 10 million

Writing text data will slow this down a little due to obtaining the next unique id from the shared strTable object. The internal str_table is shared between worksheets in Excel::Writer::XLSX. Thus, synchronization is necessary as well.

Note: This requires MCE from trunk r957 or later, which includes MCE::Shared, as MCE 1.700 is not yet released. The logic consumes only the memory necessary.
There is never duplicate data from running multiple workers.

#!/usr/bin/env perl

use strict;
use warnings;

# --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
package StrTable;

sub new {
   my ($class, $self) = ( shift, { table => {}, unique => 0 } );
   bless $self, $class;
}

sub table  { $_[0]->{table } }
sub unique { $_[0]->{unique} }

sub value {
   if (exists $_[0]->{table}->{ $_[1] }) {
      $_[0]->{table}->{ $_[1] };
   } else {
      $_[0]->{table}->{ $_[1] } = $_[0]->{unique}++;
   }
}

# --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
package main;

use Archive::Zip ();
use File::Copy qw(move);
use File::Find ();
use File::Temp (); $File::Temp::KEEP_ALL = 1;
use Excel::Writer::XLSX;
use MCE::Signal qw($tmp_dir);
use MCE::Loop 1.699;
use MCE::Shared;

my $nodeList = [ [ 'AMS' , 'a' ], [ 'APJ' , 'ap' ], [ 'EMEA', 'e' ] ];
my $strTable = mce_share( new StrTable );
my ($center,$format);

{  # Override _get_shared_string_index to synchronize str_table updates
   no warnings 'redefine';

   sub Excel::Writer::XLSX::Worksheet::_get_shared_string_index {
      my ($self, $str) = (shift, shift);
      if ( not exists ${ $self->{_str_cache} }->{$str} ) {
         ${ $self->{_str_cache} }->{$str} = $strTable->value($str);
      } else {
         ${ $self->{_str_cache} }->{$str};
      }
   }
}

sub init_wb {
   my ($wn, $file) = (shift, shift);

   # Increment $wn by 1 since worksheet xml files begin at 1
   $wn++; mkdir "$tmp_dir/$wn";

   my $wb = Excel::Writer::XLSX->new($file || "$tmp_dir/$wn/tmp.xlsx");
   $wb->set_tempdir("$tmp_dir/$wn");

# Set workbook properties
   $wb->set_properties(
      title    => 'Node List',
      author   => 'L_WC demo',
      comments => 'Node List',
   );

   # Define/add formats to the workbook
   $center = $wb->add_format(align => 'center');
   $format = $wb->add_format(align => 'center', bg_color => 44);

   # Add worksheets, specify formats for columns/rows
   for (0 .. @{ $nodeList } - 1) {
      $wb->add_worksheet($nodeList->[$_][0]);
      $wb->sheets($_)->set_column(0, 4, 15, $center);
}

   return $wb;
}

sub close_wb {
   my $wb = shift;
MCE->sync();         # Wait for others to complete, important

   $wb->{_str_table } = $strTable->table();       # Replace str_table
   $wb->{_str_total } = 0 + $strTable->unique();  # Update  str_total
   $wb->{_str_unique} = 0 + $strTable->unique();  # Update  str_unique

   $wb->close();   # Close workbook
}

sub merge_wb_data {
   my $wb_file = shift;
   my ($zip, @pths, @xlsx_files) = (Archive::Zip->new());
   local ($@, $!, $^E, $?);

   # Other files, e.g. table data, likely need the same; not done
   # for this demonstration. Just worksheet files are merged.

   # I received help by reading _store_workbook inside
   # Excel::Writer::XLSX::Workbook.pm.

   # Find worksheet files 2,3,...
   for my $_num (1 .. @{ $nodeList }) {
      my $wanted = sub {
         push @pths, $1 if $File::Find::name =~ /(.*)\/sheet$_num\.xml/;
      };
      File::Find::find({
         wanted => $wanted, untaint => 1, untaint_pattern => qr|^(.+)$|
      }, "$tmp_dir/$_num");
   }

   # Move worksheet files 2,3,... to where worksheet 1 data resides
   for (0 .. @pths - 1) {
      unlink $pths[$_]."/../../../tmp.xlsx";
      if ($_ > 0) {
         my $_num = $_ + 1;
         unlink $pths[0]."/sheet$_num.xml";
         move $pths[$_]."/sheet$_num.xml", $pths[0]."/sheet$_num.xml";
      }
   }

   # Re-zip xlsx files
   my $wanted   = sub { push @xlsx_files, $File::Find::name if -f };
   my $temp_dir = $pths[0]."/../../";
   my $short_name;

File::Find::find({
wanted => $wanted, untaint => 1, untaint_pattern => qr|^(.+)$|
   }, $temp_dir);

   for my $file_name (@xlsx_files) {
$short_name =$file_name;
$short_name =~ s{^\Q$temp_dir\E/?}{};
      $zip->addFile($file_name, $short_name);
   }

   open my $fh, '>', $wb_file or die "Error opening xlsx file: $!\n";
   binmode $fh;
   $zip->writeToFileHandle($fh);
   close $fh;
}

# --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---

MCE::Loop::init(
   max_workers => scalar(@{ $nodeList }),
   chunk_size  => 1,
   posix_exit  => 1,
   use_threads => 0,
);

mce_loop {
   my ($region, $sql) = ($_->[0], $_->[1]);
   my ($wb, $ws);

   # Acquire data from the DB. Each worker must obtain a handle.
   # The DB logic is similar to running serially. Just the where
   # clause is likely unique for each region.

   # Fill worksheet rows/cells
   if ($region eq 'AMS') {
      $wb = init_wb(0);
      $ws = $wb->sheets(0);
      $ws->write(0, 2, 'foo', $format);
      for ( 1 .. 111_111 ) {
         $ws->write(0 + $_, 0, 1000 + $_);
         $ws->write(1 + $_, 2, 2000 + $_);
         $ws->write(2 + $_, 4, 3000 + $_);
}
print "AMS  ---- DONE.\n";
}
   elsif ($region eq 'APJ') {
      $wb = init_wb(1);
      $ws = $wb->sheets(1);
$ws->write(0, 2, 'bar',$format);
for ( 1 .. 111_111 ) {
         $ws->write(0 + $_, 0, 4000 + $_);
         $ws->write(1 + $_, 2, 5000 + $_);
         $ws->write(2 + $_, 4, 6000 + $_);
      }
      print "APJ  ---- DONE.\n";
   }
   elsif ($region eq 'EMEA') {
      $wb = init_wb(2);
      $ws = $wb->sheets(2);
      $ws->write(0, 2, 'baz', $format);
      for ( 1 .. 111_111 ) {
         $ws->write(0 + $_, 0, 7000 + $_);
         $ws->write(1 + $_, 2, 8000 + $_);
         $ws->write(2 + $_, 4, 9000 + $_);
}
print "EMEA ---- DONE.\n";
}

   close_wb($wb) if $wb;

} $nodeList;

# Shutdown MCE
MCE::Loop::finish();

# Merge data into one workbook
merge_wb_data('Node_List.xlsx');

print "Node List is Done.\n";

Kind regards, Mario

RFC: Net::SNTP::Client v1
5 direct replies — Read more / Contribute
by thanos1983
on Jun 30, 2015 at 12:45

Hello Everyone,

About a year ago I started with the idea of creating a Perl module based on Net::NTP. The module that I am thinking to create would be named Net::SNTP::Client. The difference between the two is precision: from my point of view, the Net::NTP module does not get correct millisecond/nanosecond precision. The module is based on RFC 4330, according to which different precision will be achieved on Linux and on Windows. In theory the module should be compatible with all OSes (Windows, Linux and MacOS); please verify that with me, since I only have Linux. I am planning to create also another module, Net::SNTP::Server, which is approximately an SNTP server; and when I say approximately, it is because I can not figure out how to replicate the server side. But anyway, first things first.

Is it possible to take a look and assist me with possible improvements and comments? Since this is my first module I have no experience, so maybe the module is not well written. The execution of the script is very simple: create a script e.g. client.pl and put the code below in it.

client.pl

I have inserted four options:

 -hostname     => NTP Hostname or NTP IP
 -port         => 123 Default or Users choice e.g. 5000
 -RFC4330      => 1
 -clearScreen  => 1

The -RFC4330 option produces an RFC 4330-style printout, and the -clearScreen option clears the screen before the printout. I think both options will be useful in the printout of the script. I have chosen to paste the module in the folder path "/home/username/Desktop/SNTP_Module/Net/SNTP/Client.pl". Remember, for testing purposes, to change the path in client.pl according to the location where you place the module.
Update 1: Removing (EXPORT_OK, EXPORT_TAGS, shebang line) based on toolic's comments.

Update 2: Removing the unused sub $frac2bin.

Update 3: Adding some checks on the input of getSNTPTime sub

Update 4: Adding Plain Old Documentation format and updating code based on Monk::Thomas comments.

Update 5: Updating code based on Monk::Thomas new comments.

Update 6: Updating code, with new updated Plain Old Documentation.

Net::SNTP::Client.pm

Seeking for Perl wisdom...on the process of learning...not there...yet!
Software Projects In Real Life: "I See Dead People"
8 direct replies — Read more / Contribute
by sundialsvc4
on Jun 23, 2015 at 08:44

The title of this Meditation comes, of course, from the punch-line of a really bad movie with a classic O. Henry ending.

But in many ways, it also sums up my career.   (Polite pause as the twitter of laughter dies down.)   For most of the past 25 years or so, I’ve been involved in projects.   Generally, not ones that I had started.   Generally, not healthy ones.   Dead ones, or very nearly so.   My task was to try to “turn them around,” and I generally did.   Whether or not my attempts at resuscitation were actually long-term successful, this experience did teach me a lot of the reasons ... and they are human reasons ... why software projects so often go so badly wrong.   I’m not going to do any preaching here, although it may seem so.   I’m just relating some of my personal experiences in a mortuary project triage.   (FYI:   Teams-in-place were anywhere from one to fifteen people, most of whom had “split the coop.”)

First of all, these projects typically started out with “a great deal of enthusiasm, but no real plan.”   The usual justification was that the project needed to “hurry up to market,” or that the stakeholders in the project “would know it when they saw it” and the managers of the project (if there were any ...) simply gave-up trying to ask them to make up their minds.

And, of course, in several cases, those stakeholders were assured that they didn’t have to make up their minds.   “Self-directed teams,” the programmers purred self-confidently, “would produce a ‘potentially viable product(!)’ every two weeks!

“SOP = SOTP.™”   Standard Operating Procedure = Seat Of The Pants.

And yet, what happened ... what inevitably happened ... is that everything in the “software mechanism” turned out to be inextricably coupled to everything else.   As layers of code were piled on, and as changes pinged-and-ponged throughout all those layers, the whole thing fell down in a heap as the programmers sailed on to the next green pasture.

Many software projects are actually the work of one Guy.   (Sorry, ladies ...)   That “one guy” might be surrounded by several other people, but this is simply an attempt to scale-up the only modus operandi that this One Guy actually knows:   himself.   The project “feels its way along” because that’s how he’s used to doing it.   (And, because he is a crackerjack programmer, is used to eventually succeeding producing something.)   There simply isn’t any experience in being part of a successfully managed project:   most programmers, I candidly suspect, haven’t actually seen one.   (And there were no Angelic Choirs that started singing when I showed up either, I’m afraid ... no self-sunshine here.)

The underlying reason for these problems, I think, is:   a very natural human reaction to what is a virtually-unmanageable technical situation.   The objective of the project is to build a self-directing machine ... and to do it perfectly, because nothing less than perfection will do.   Viewed as a mechanism, software would be said to have “unlimited degrees-of-freedom.”   i.e. “Anything is connected to everything else.”   Although the instant-to-instant flow of control within the software is of course described by if/then/else and looping constructs, the actual mechanism is also determined by its internal and external state.   This concern for “state” is what causes the coupling.   (And it’s also one of the reasons why “Functional Programming” is such a hot research topic.)

My biggest criticism of Scrum, and Agile, and XP, and, well, most “methodologies,” is that they ignore this aspect.   They focus, instead, upon the organization and the daily work-activities of the team.   They discuss things like “user stories,” which are simply one possible way of trying to express one’s ideas and plans to a customer, but then omit from consideration exactly how that “story” is to become if/then/else, and how that new web of decision-logic is to be tested, and how it both affects and is affected by (“is infinitely coupled to ...”) everything else.   As a paradigm, useful in one sense though it may be, it does not and probably cannot (IMHO) go far enough.

“We are building a self-directing machine.”   That, quite frankly, is the light-bulb moment that I got from the Managing the Mechanism e-book.   It’s something that we could say to business stakeholders, except that it is extremely likely to scare them off.   It certainly does, I think, offer some useful insight into what we might be missing in our present-day methodologies.   We certainly do need better processes for our work, better ways to describe them, and better ways to inform stakeholders of exactly what we need from them and why.

In closing, one of the most prevalent things that I have seen, in every project that I have tried to turn-around, is disillusionment.   On both sides of the aisle.   Long before the software had broken down, communication had also broken down, and so had business process (if it ever truly existed).   No one builds houses and bridges that way.   (For very obvious, flammable and heavy reasons, no one is allowed to ...)   I suspect that the seeds of project failure are sown almost as soon as the first plow-blade cuts the soil.   This is our problem, as a profession, and we need a better solution to it.   Perhaps a different viewpoint is a start.

That’s my Meditation.   Born, as I said, of a most interesting career path that has not always been a happy one.   What do you think?   What have your experiences been?   For instance, have you worked through a spectacular success story with one of these other strategies?   I’d love to hear it . . .   The water in the cooler is ice-cold and there’s beer in the fridge that’s even colder.   Let the discussions begin!
