PerlMonks  

The Monastery Gates

New Questions
How do I go from procedural to object oriented programming?
8 direct replies — Read more / Contribute
by Lady_Aleena
on Apr 20, 2015 at 16:43

    (Jenda corrected me on what my subroutines are: they are procedures, not functions. The title of this node has been updated, and wherever the word "function", or any derivative of it, appeared, it has been changed to "procedure".)

    Going from procedural programming to object-oriented programming has been on my mind a lot lately. I have been told by a couple of people that my code is very close to being OO; however, when I gave OO a try the first time, I was told I was doing it all wrong. I would like to see how close I am, but I am having a hard time learning objects because the tutorials I have found start writing code right away. I have yet to find a tutorial which starts with the objective of the objects being written. For example, I want...

    Criminal Minds is a 2005 television series which is still running.

    Iron Man is a 2008 film based on comics by Marvel Comics.

    The tutorials also do not show the data being munged up front like...

    They all start right in on the objects, leaving me completely in the dark about what the end goal for the objects is. From above you know my objective and have the data to reference while reading the procedures I wrote to get to my objective. The whole module is here if you would like to see the bigger picture.

    Now putting it all together to get my objective.

    I would like to know how close I am to having objects, what they would look like if I am close, and if there is anything which does not fit into OO. Is there anywhere I need to change my thinking (which will be hard since I have been doing things as above for a long while now)?
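
    For orientation only, here is a minimal sketch of what one small class for the example data above might look like. The class name, fields, and methods are invented for illustration and are not taken from the linked module:

      #!/usr/bin/perl
      use strict;
      use warnings;

      package Media::Program;

      # One object per program (film or TV series); fields mirror the sentences above.
      sub new {
          my ($class, %args) = @_;
          my $self = {
              title    => $args{title},
              year     => $args{year},
              type     => $args{type},       # 'film' or 'television series'
              status   => $args{status},     # e.g. 'still running' (optional)
              based_on => $args{based_on},   # e.g. 'comics by Marvel Comics' (optional)
          };
          return bless $self, $class;
      }

      # Turn one object into a sentence like the examples above.
      sub description {
          my ($self) = @_;
          my $text = "$self->{title} is a $self->{year} $self->{type}";
          $text .= " which is $self->{status}"   if $self->{status};
          $text .= " based on $self->{based_on}" if $self->{based_on};
          return "$text.";
      }

      package main;

      my @programs = (
          Media::Program->new(title => 'Criminal Minds', year => 2005,
                              type  => 'television series', status => 'still running'),
          Media::Program->new(title => 'Iron Man', year => 2008,
                              type  => 'film', based_on => 'comics by Marvel Comics'),
      );

      print $_->description, "\n" for @programs;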

    If any of the OO tutorials were written with the objective, data, code, and wrap-up in that order, I might get them.

    Another reason I am doing this is to get my mind off of my pain and impending surgery. I am doing everything I can think of to get my mind off of them and stave off panic. Would you please help me?

    I hope I am not asking too much. Please sit back and enjoy a cookie.

    No matter how hysterical I get, my problems are not time sensitive. So, relax, have a cookie, and a very nice day!
    Lady Aleena
Creating Perl5 plugin for Intellij IDEA
7 direct replies — Read more / Contribute
by hurricup
on Apr 19, 2015 at 14:41

    Recently I've tried different IDEs for Perl 5, and the experience was... sad...

    Before that I had developed for a long, long time using NP++, no kidding. I'm a Windows user, so no vim or the like.

    Padre seems like a really cool idea, but it's unstable and looks abandoned.

    Currently I'm working with Komodo, but it's far from perfect: it has bugs and, even then, costs money.

    From my own experience, the most powerful and convenient IDEs are Microsoft VS and IntelliJ IDEA, so I decided to try to write a plugin for IDEA.

    The first and main problem with a Perl IDE (as I see it) is Perl's very free syntax. No BNF, no nothing. You can't just point generators at a grammar and have them build the lexer and parser for you. They may help, but only help.

    I tried digging into the perl sources and porting toke.c to Java. Bad idea. Too much legacy code, too many little things that I don't understand. Porting each function adds a few more functions to port, and some of them require a very deep understanding of what is going on inside. So I dropped that idea and started from scratch.

    Currently I'm building a lexer using JFlex, and I think it's going pretty well. Some coloring already works.

    Now I need to improve the lexer, start writing a parser, and implement the IDEA features. There is a lot of work to do, so if anyone is interested in such a plugin and would like to participate, you are welcome: https://github.com/hurricup/Perl5-IDEA

    You may also contact me via Skype (hurricup) or my Gmail address (hurricup).

    I believe that together we can do it.

    P.S. To participate you should have some knowledge of Perl and Java.

    P.P.S. If you have no experience with Java but are pretty familiar with Perl's insides, your consultations would be appreciated.

Perl koan #2300 (Perl in the browser)
4 direct replies — Read more / Contribute
by rje
on Apr 17, 2015 at 13:46

    Perl koan n. A puzzling, often paradoxical statement or suggestion, used by Perl hackers as an aid to meditation and a means of gaining spiritual awakening or something.


    For the moment, don't think about the obstacles.

    Imagine you could plug Perl (perhaps defaulting to strict mode, with Mo(o|u)(se)? built in) into your browser when JavaScript gets too painful.

    What possible benefits might that give? Assuming first that JavaScript is difficult for enterprise-scale apps (hence the existence of Dart, and of frameworks like AngularJS), it seems that Perl could fill that role -- it has been stable for years and has a wide user base.

    Obstacles aside, can you see the utility and beauty of Perl5 on every browser - not just Opera?

Is it possible to localize the stat/lstat cache?
5 direct replies — Read more / Contribute
by bounsy
on Apr 17, 2015 at 10:08

    I have a function that gets called a lot (a wanted function for File::Find going against large numbers of files). To avoid constantly hitting the disk, I do one stat/lstat and then use -X _ repeatedly after that (using the cached results of the last stat/lstat).

    In some cases, I need to call other functions that need to be able to use stat/lstat, which will overwrite the cached results in _. The actual calls to these other functions are uncommon in frequency (exception handling, essentially), but there are many places in the main function that might need to call them.

    Ideally, I would like to be able to localize the cached results of the stat/lstat call in some way (in the called functions). Is there a way to do this? (Note that local _; doesn't compile and local *_; doesn't work.) Is there a better and/or more generic approach?

    If I can't localize in any way, one approach I'm considering is caching the results I need in a hash. For example:

      #NOTE: Depending on certain conditions, I need either stat or lstat.
      if (...) {
          stat($Filename);
      }
      else {
          lstat($Filename);
      }

      #Current code uses this format:
      #    if (-X _) {...}

      #Possible new code (including only the tests I need to use):
      my %Stat = (
          r => (-r _),
          w => (-w _),
          x => (-x _),
          s => (-s _),
          ...
      );
      if ($Stat{r}) {...}

    Thoughts?
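
    For comparison, the core File::stat module side-steps the shared _ cache entirely: its stat and lstat return an object you keep in a variable, so nested calls cannot clobber your results. A rough sketch, with hypothetical file names:

      use strict;
      use warnings;
      use File::stat;   # core module; overrides stat/lstat to return objects

      my $st = stat('/etc/hostname') or die "stat failed: $!";

      # The object survives any stat/lstat done inside other functions.
      some_other_function_that_also_stats();

      printf "size: %d bytes, mode: %04o, mtime: %d\n",
             $st->size, $st->mode & 07777, $st->mtime;

      # Note: permission-style checks (-r/-w/-x) would still need either a fresh
      # filetest or interpreting $st->mode yourself, so this is not a drop-in
      # replacement for every -X test.

      sub some_other_function_that_also_stats {
          my $other = lstat('/tmp') or die "lstat failed: $!";
          # ...exception handling that needs its own stat results...
      }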

Populating a hash-ref from multiple arrays with slice?
4 direct replies — Read more / Contribute
by Anonymous Monk
on Apr 16, 2015 at 13:33
    Hi perl monks. I am stumped with this one.

    I have several arrays that I want to slice into a single hash ref. Any tips? I read a bit about creating a hash slice from 2 arrays, but can't figure out the syntax for creating a hash ref slice from multiple arrays.

    For example (excuse grammatical errors -- this is more pseudo code than anything):

      my $hash_ref;
      my @array1 = qw(some_unique_key1 some_unique_key2 some_unique_key3);
      my @array2 = qw(meta_data_1 metadata_2 metadat_3);
      my @array3 = qw(submitted_date1 submitted_date2 submitted_date3);

    The hash ref would have a structure like:

      $some_uniquekey1
          --> 'Metadata'       --> $meta_data1 = '1'
          --> 'Submitted Date' --> $subitted_date1 = '1'
      $some_uniquekey2 ... etc
      $some_uniquekey3 ... etc

    Basically the arrays' value indices all correspond to each other -- I just want to merge them all into an accessible table.

    Any tips or slaps up the side of my head appreciated!
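
    One illustrative way to do it, sketched with the pseudo-code's names above; each inner hash is filled with a hash slice, and the outer hashref autovivifies:

      use strict;
      use warnings;
      use Data::Dumper;

      my @array1 = qw(some_unique_key1 some_unique_key2 some_unique_key3);  # keys
      my @array2 = qw(meta_data_1 metadata_2 metadat_3);                    # metadata
      my @array3 = qw(submitted_date1 submitted_date2 submitted_date3);     # dates

      my $hash_ref = {};

      # The arrays are parallel, so walk them by index.
      for my $i (0 .. $#array1) {
          @{ $hash_ref->{ $array1[$i] } }{ 'Metadata', 'Submitted Date' }
              = ( $array2[$i], $array3[$i] );
      }

      print Dumper($hash_ref);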

Approximate logarithmic statistical module: new() params naming
1 direct reply — Read more / Contribute
by Dallaylaen
on Apr 16, 2015 at 02:37

    Suppose we have a module for approximate memory-efficient statistical analysis which stores data in a set of logarithmic bins. However, around zero, depending on the data, it may be suitable to switch to linear interpolation (as in "no measurement is absolutely precise, why use so many bins").

    For now, the proposed new() interface (not yet released to CPAN) is as follows:

    • base - base of logarithmic intervals, i.e. upper_bound/lower_bound;
    • precision - width of linear intervals around zero;
    • zero_thresh - optional threshold at which to switch to linear, normally it's calculated dynamically (we want to switch where the linear and logarithmic intervals are of the same width).

    I'm ok with my data model, but the parameter names seem a bit weird.

    I would like to rename them to relative_precision, absolute_precision, and linear_threshold respectively. Does that look clear enough?
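
    To make the comparison concrete, here is how a hypothetical constructor call would read under the current names and under the proposed ones (the numeric values are made up for illustration and do not come from the module's documentation):

      use Statistics::Descriptive::LogScale;

      # Current (pre-release) parameter names; values are illustrative only.
      my $stat_old = Statistics::Descriptive::LogScale->new(
          base      => 1.01,   # upper_bound/lower_bound of each logarithmic bin
          precision => 0.01,   # width of the linear bins around zero
          # zero_thresh is optional; normally calculated dynamically
      );

      # The same call under the proposed names.
      my $stat_new = Statistics::Descriptive::LogScale->new(
          relative_precision => 1.01,
          absolute_precision => 0.01,
          # linear_threshold is optional; normally calculated dynamically
      );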

    I was also thinking of absolute/relative error, but the error is really a variable quantity, and it is no more than half the precision. I think this could cause additional WTF.

    Are there any better ideas?

    The module in question is Statistics::Descriptive::LogScale. Here's the previous discussion.

sudo ignoring string entry after first space encountered
3 direct replies — Read more / Contribute
by perl197
on Apr 14, 2015 at 17:12

    Hi all. I have a production environment that requires the use of sudo to execute Perl scripts by multiple users (emulating a production account, not root). A Perl script accepts a where clause as an input parameter, i.e.:

        sudo myperlscript.pl -cs CONNECTIONSTRING -w "fieldvalue in ('111','222')"

    In myperlscript.pl, -w is defined as a string ("w=s" => \$where_condition). However, if I print $where_condition, the response is simply "fieldvalue", with the rest of the string stripped off after the first space. This causes an error when executing the combined SQL statement. My question to the Monks community: is there an escape sequence or quote combination you can recommend that allows the entire string to be passed to the variable? I could write the where condition to a file and read it from there, but I would like to solve this riddle with a less cumbersome workaround. Any suggestions are welcome.
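
    For reference, a minimal test case along these lines (script and file names are hypothetical) can help isolate whether the argument is split before Perl ever sees it, or during option parsing:

      #!/usr/bin/perl
      # showargs.pl - print what actually arrives in @ARGV and what Getopt::Long captures
      use strict;
      use warnings;
      use Getopt::Long;

      print "raw ARGV:\n";
      print "  [$_]\n" for @ARGV;

      my $where_condition;
      GetOptions("w=s" => \$where_condition) or die "bad options\n";
      print "parsed -w: [", defined $where_condition ? $where_condition : 'undef', "]\n";

      # Invoked as:
      #   sudo ./showargs.pl -w "fieldvalue in ('111','222')"
      # If the raw ARGV entries already show only [fieldvalue], the string was split
      # before Perl ran (a shell alias, a sudo wrapper script, or a nested sh -c are
      # the usual suspects); if raw ARGV is intact but the parsed value is truncated,
      # the problem is in the option handling itself.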

Undefined value as an ARRAY reference, JSON data
2 direct replies — Read more / Contribute
by johnfl68
on Apr 14, 2015 at 13:35

    Hello, I am reading data from JSON, from an external API source.

    On some occasions, one of the arrays in the JSON is not present because there is no applicable data. This causes the "Can't use an undefined value as an ARRAY reference" error.

    I've been looking around for a while for a way to get some working solution out of the error, so I know to just not process that data. But every post I have read makes the assumption that the data is flawed because the array I am looking for should always be there, which isn't the case.

    It's like a catch 22, I need to read it to check and see if it is defined, but I can't read it because it is undefined.

    Can anyone please point me in the right direction to handle this error? I am sure it is something simple that is just escaping me.

    Thank you!
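
    A common pattern for this situation, sketched below with hypothetical key names (the real structure depends on the API), is to test the slot before dereferencing it:

      use strict;
      use warnings;
      use JSON::PP;

      # Inline sample standing in for the API response; note there is no "alerts" key.
      my $json_text = '{"currently":{"temperature":72}}';
      my $data = decode_json($json_text);

      # 'alerts' is a stand-in for whichever array is sometimes missing.
      if (ref $data->{alerts} eq 'ARRAY') {
          for my $alert (@{ $data->{alerts} }) {
              # ...process each entry...
          }
      }
      else {
          print "no alerts in this response, skipping\n";
      }

      # If an empty list is an acceptable default (// requires perl 5.10+):
      my @alerts = @{ $data->{alerts} // [] };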

New Meditations
Want to make an Everything site of your own?
1 direct reply — Read more / Contribute
by thomas895
on Apr 20, 2015 at 03:48

    Ever wanted to experiment with the engine PerlMonks is built on? I did, but it's rather difficult to install, so I thought I'd write this for anyone who wanted to give it a go themselves.

    My Perl is v5.16.2. YMMV for others.

    Requirements:

    • A MySQL/MariaDB server

    • GNU patch

    • C compiler

    • Some knowledge of how Apache works

    Estimated time: a quiet evening or so

    1. Download Apache 1.3.9 and mod_perl 1.14 from your nearest mirror, then unpack them. You may use other versions, but this guide won't apply to them.

    2. I wanted to install it all to my home directory. I ran mod_perl's Makefile.PL like so:

      perl Makefile.PL APACHE_SRC=../apache_1.3.9/src APACHE_PREFIX=$HOME/opt/apache1.3.9 DO_HTTPD=1 USE_APACI=1 EVERYTHING=1 PREFIX=/home/thomas/perl5
      Adjust as needed.
    3. If you have a relatively new version of gcc and a Perl v5.14 or newer, you will need to make some changes to the source. Apply this patch file to the mod_perl directory, and this one to the apache directory. It does the following (you can skip these details if you want):

      • In v5.16, $< and friends are no longer cached. I just tried removing the problematic section that used these variables, and that seemed to work. You might not be able to run your server as root (which requires being able to change its own permissions), but I haven't checked.

      • For some reason, it was a trend to supply your own version of getline (maybe the libc one was broken, haven't looked it up) in those days. In any case, gcc complains about it, so I updated all of the code to use the Apache one. (it only affects the password utility, which is not really needed in our case, but it does cause make to fail)

      • In v5.14, you can't use the result of GvCV and friends as lvalues anymore, so I replaced the places where something was assigned to the result of that function with the appropriate _set macro, as the delta advises.

    4. Run make and make install, and go make some coffee. You can make test, too, but then also grab a sandwich.

    5. Try to start Apache as make install's instructions suggest, to make sure it works. You may need to choose a different port number; do so with the Listen and Port options in httpd.conf.

      • If you installed Apache locally, you will need to modify apachectl and/or your shell startup script: make sure that the PERL5LIB environment variable is set to where mod_perl's Perl libraries are installed.
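
      For example, for a local, non-root install, both adjustments might look like this (the port number and path below are arbitrary examples; adjust them to the PREFIX you actually used):

        # httpd.conf: pick an unprivileged port
        Port 8080
        Listen 8080

        # apachectl or your shell startup file: tell mod_perl's Perl where its
        # libraries were installed
        PERL5LIB=$HOME/perl5/lib/perl5
        export PERL5LIB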

      Now for Everything else...

    6. Download this, unpack it, and follow QUICKINSTALL up to (but not including) the install_esite step.

      • When running Makefile.PL, if you want to install locally, don't forget to set PREFIX accordingly.

      • It is not necessary to let it append things to your httpd.conf; in a later step I'll show you why, and what to do instead.

    7. If you have a modern mysql/mariadb, some of the SQL scripts won't work. Here is another patch to fix them.

      • It mostly has to do with the default values of the integer columns: once the quoted zero is dropped as a default value, MySQL accepts them.

      • There is also a timestamp column that is given a size in the script; MySQL doesn't like that, so removing the size makes it work again.

    8. Now run install_esite, as QUICKINSTALL says.

    9. For some reason, index.pl only showed up as text, perhaps due to the other mod_perl settings I'd been playing with, or perhaps it was something else. I added this to httpd.conf, and then it worked:

      PerlModule Apache::Registry
      PerlModule Apache::DBI
      PerlModule CGI

      <Files *.pl>
          SetHandler perl-script
          PerlHandler Apache::Registry
          Options +ExecCGI
          PerlSendHeader On
          PerlSetupEnv On
      </Files>
    10. (Re)start Apache, visit /index.pl, and have lots of fun!

    If something doesn't work for you, post it below.

    Happy hacking!


    Edit: forgot the third patch

    -Thomas
    "Excuse me for butting in, but I'm interrupt-driven..."
    Did you know this software was released when I was only 3 years old? Still works, too -- I find that amazing.
Data-driven Programming: fun with Perl, JSON, YAML, XML...
6 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Apr 19, 2015 at 04:41

    The programmer at wit's end for lack of space can often do best by disentangling himself from his code, rearing back, and contemplating his data. Representation is the essence of programming.

    -- from The Mythical Man Month by Fred Brooks

    Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

    -- Rob Pike

    As part of our build and test automation, I recently wrote a short Perl script for our team to automatically build and test specified projects before checkin.

    Lamentably, another team had already written a truly horrible Windows .BAT script to do just this. Since I find it intolerable to maintain code in a language lacking subroutines, local variables, and data structures, I naturally started by re-writing it in Perl.

    Focusing on data rather than code, it seemed natural to start by defining a table of properties describing what I wanted the script to do. Here is a cut-down version of the data structure I came up with:

      # Action functions (return zero on success).
      sub find_in_file {
          my $fname  = shift;
          my $str    = shift;
          my $nfound = 0;
          open( my $fh, '<', $fname ) or die "error: open '$fname': $!";
          while ( my $line = <$fh> ) {
              if ( $line =~ /$str/ ) {
                  print $line;
                  ++$nfound;
              }
          }
          close $fh;
          return $nfound;
      }
      # ...

      # ------------------------------------------------------------------
      # Globals (mostly set by command line arguments)
      my $bldtype = 'rel';

      # ------------------------------------------------------------------
      # The action table @action_tab below defines the commands/functions
      # to be run by this program and the order of running them.
      my @action_tab = (
          {
              id      => 'svninfo',
              desc    => 'svn working copy information',
              cmdline => 'svn info',
              workdir => '',
              logfile => 'minbld_svninfo.log',
              tee     => 1,
              prompt  => 0,
              run     => 1,
          },
          {
              id      => 'svnup',
              desc    => 'Run full svn update',
              cmdline => 'svn update',
              workdir => '',
              logfile => 'minbld_svnupdate.log',
              tee     => 1,
              prompt  => 0,
              run     => 1,
          },
          # ...
          {
              id      => "bld",
              desc    => "Build unit tests ${bldtype}",
              cmdline => qq{bldnt ${bldtype}dll UnitTests.sln},
              workdir => '',
              logfile => "minbld_${bldtype}bldunit.log",
              tee     => 0,
              prompt  => 0,
              run     => 1,
          },
          {
              id      => "findbld",
              desc    => 'Call find_strs_in_file',
              fn      => \&find_in_file,
              fnargs  => [ "minbld_${bldtype}bldunit.log", '[1-9][0-9]* errors' ],
              workdir => '',
              logfile => '',
              tee     => 1,
              prompt  => 0,
              run     => 1,
          },
          # ...
      );

    Generally, I enjoy using property tables like this in Perl. I find them easy to understand, maintain and extend. Plus, a la Pike above, focusing on the data first usually makes the coding a snap.

    Basically, the program runs a specified series of "actions" (either commands or functions) in the order specified by the action table. In the normal case, all actions in the table are run. Command line arguments can further be added to specify which parts of the table you want to run. For convenience, I added a -D (dry run) option to simply print the action table, with indexes listed, and a -i option to allow a specific range of action table indices to be run. A number of further command line options were added over time as we needed them.

    Initially, I started with just commands (returning zero on success, non-zero on failure). Later "action functions" were added (again returning zero on success and non-zero on failure).

    As the table grew over time, it became tedious and error-prone to copy and paste table entries. For example, if there are four different directories to be built, rather than having four entries in the action table that are identical except for the directory name, I wrote a function that took a list of directories and returned an action table. None of this was planned, the script just evolved naturally over time.

    Now it is time to take stock, hence this meditation.

    Coincidentally, around the same time as I wrote my little script, we inherited an elaborate testing framework that specified tests via XML files. To give you a feel for these, here is a short excerpt:

      <Test>
        <Node>Muss</Node>
        <Query>Execute some-command</Query>
        <Valid>True</Valid>
        <MinimumRows>1</MinimumRows>
        <TestColumn>
          <ColumnName>CommandResponse</ColumnName>
          <MatchesRegex row="0">THRESHOLD STARTED.*Taffy</MatchesRegex>
        </TestColumn>
        <TestColumn>
          <ColumnName>CommandExitCode</ColumnName>
          <Compare function="Equal" row="0">0</Compare>
        </TestColumn>
      </Test>

    Now, while I personally detest using XML for these sorts of files, I felt the intent was good, namely to clearly separate the code from the data, thus allowing non-programmers to add new tests.

    Seeing all that XML at first made me feel disgusted ... then uneasy because my action table was embedded in the script rather than more cleanly represented as data in a separate file.

    To allow my script to be used by other teams, and by non-programmers, I need to make it easier to specify different action tables without touching the code. So I seek your advice on how to proceed:

    • Encode the action table as an XML file.
    • Encode the action table as a YAML file (a hand-converted sample entry is sketched after this list).
    • Encode the action table as a JSON (JavaScript Object Notation) file.
    • Encode the action table as a "Perl Object Notation" file (and read/parse via string eval).
    • Turn the script and action table/s into Perl module/s.
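
    For a feel of what the data would look like outside the script, here is the first action-table entry above, hand-converted to YAML (illustrative only; the script does not currently read such a file):

      # One action per list item; same keys as the Perl hashes above.
      - id:      svninfo
        desc:    svn working copy information
        cmdline: svn info
        workdir: ''
        logfile: minbld_svninfo.log
        tee:     1
        prompt:  0
        run:     1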

    Another concern is that when you have thousands of actions, or thousands of tests, a lot of repetition creeps into the data files. Now dealing with repetition (DRY) in a programming language is trivial -- just use a function or a variable, say -- but what is the best way of dealing with unwanted repetition in XML, JSON and YAML data files? Suggestions welcome.
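
    On that last point, YAML at least has a built-in answer: anchors and merge keys let a plain data file factor out repeated key/value groups (merge-key support varies between YAML libraries, so check yours). An illustrative fragment:

      # Shared defaults defined once under an anchor...
      defaults: &defaults
        workdir: ''
        tee:     1
        prompt:  0
        run:     1

      actions:
        # ...and merged into each entry that needs them.
        - <<: *defaults
          id:      svninfo
          desc:    svn working copy information
          cmdline: svn info
          logfile: minbld_svninfo.log
        - <<: *defaults
          id:      svnup
          desc:    Run full svn update
          cmdline: svn update
          logfile: minbld_svnupdate.log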

    References

Effective Automated Testing
2 direct replies — Read more / Contribute
by eyepopslikeamosquito
on Apr 18, 2015 at 04:50

    I'll be giving a talk at work about improving our test automation. Initial ideas are listed below. Feedback on talk content and general approach are welcome along with any automated testing anecdotes you'd like to share. Possible talk sections are listed below.

    Automation Benefits

    • Reduce cost.
    • Improve testing accuracy/efficiency.
    • Regression tests ensure new features don't break old ones. Essential for continuous delivery.
    • Automation is essential for tests that cannot be done manually: performance, reliability, stress/load testing, for example.
    • Psychological. More challenging/rewarding. Less tedious. Robots never get tired or bored.

    Automation Drawbacks

    • Opportunity cost of not finding bugs had you done more manual testing.
    • Automated test suite needs ongoing maintenance. So test code should be well-designed and maintainable; that is, you should avoid the common pitfall of "oh, it's only test code, so I'll just quickly cut n paste this code".
    • Cost of investigating spurious failures. It is wasteful to spend hours investigating a test failure only to find out the code is fine, the tests are fine, it's just that someone kicked out a cable. This has been a chronic nuisance for us, so ideas are especially welcome on techniques that reduce the cost of investigating test failures.
    • May give a false sense of security.
    • Still need manual testing. Humans notice flickering screens and a white form on a white background.

    When and Where Should You Automate?

    • Testing is essentially an economic activity. There are an infinite number of tests you could write. You test until you cannot afford to test any more. Look for value for money in your automated tests.
    • Tests have a finite lifetime. The longer the lifetime, the better the value.
    • The more bugs a test finds, the better the value.
    • Stable interfaces provide better value because it is cheaper to maintain the tests. Testing a stable API is cheaper than testing an unstable user interface, for instance.
    • Automated tests give great value when porting to new platforms.
    • Writing a test for customer bugs is good because it helps focus your testing effort around things that cost you real money and may further reduce future support call costs.

    Adding New Tests

    • Add new tests whenever you find a bug.
    • Around code hot spots and areas known to be complex, fragile or risky.
    • Where you fear a bug. A test that never finds a bug is poor value.
    • Customer focus. Add new tests based on what is important to the customer. For example, if your new release is correct but requires the customer to upgrade the hardware of 1000 nodes, they will not be happy.
    • Documentation-driven tests. Go through the user manual and write a test for each example given there.
    • Add tests (and refactor code if appropriate) whenever you add a new feature.
    • Boundary conditions.
    • Stress tests.
    • Big ones, but not too big. A test that takes too long to run is a barrier to running it often.
    • Tools. Code coverage tools tell you which sections of the code have not been tested. Other tools, such as static (e.g. lint) and dynamic (e.g. valgrind) code analyzers, are also useful.

    Test Infrastructure and Tools

    • Single step, automated build and test. Aim for continuous integration.
    • Clear and timely build/test reporting is essential.
    • Quarantine flaky failing tests quickly; run separately until solid, then return to main build. No broken windows.
    • Make it easy to find and categorize tests. Use test metadata.
    • Integrate automated tests with revision control, bug tracking, and other systems, as required.
    • Divide test suite into components that can be run separately and in parallel. Quick test turnaround time is crucial.

    Design for Testability

    • It is much easier/cheaper to write automated tests for systems that were designed with testability in mind in the first place.
    • Interfaces Matter. Make them: consistent, easy to use correctly, hard to use incorrectly, easy to read/maintain/extend, clearly documented, appropriate to audience, testable in isolation.
    • Dependency Injection is perhaps the most important design pattern in making code easier to test (a tiny sketch follows this list).
    • Mock Objects are also frequently useful and are broader than just code. For example, I've written a number of mock servers in Perl (e.g. a mock SMTP server) so as to easily simulate errors, delays, and so on.
    • Consider ease of support and diagnosing test failures during design.
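
    To make the Dependency Injection point concrete, here is a tiny Perl sketch; the class and method names are invented for illustration. Because the mailer is passed in, a test can hand the sender a mock instead of a real SMTP client:

      use strict;
      use warnings;

      package ReportSender;
      sub new {
          my ($class, %args) = @_;
          # The mailer is injected rather than constructed internally,
          # so tests can supply any object with a send() method.
          return bless { mailer => $args{mailer} }, $class;
      }
      sub send_report {
          my ($self, $text) = @_;
          return $self->{mailer}->send(subject => 'Nightly report', body => $text);
      }

      package MockMailer;
      sub new  { bless { sent => [] }, shift }
      sub send { my ($self, %msg) = @_; push @{ $self->{sent} }, \%msg; return 1 }

      package main;
      use Test::More tests => 2;

      my $mailer = MockMailer->new;
      my $sender = ReportSender->new(mailer => $mailer);
      ok( $sender->send_report("all builds green"), 'send_report returns true' );
      is( scalar @{ $mailer->{sent} }, 1, 'exactly one message handed to the mailer' );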

    Test Driven Development (TDD)

    • Improved interfaces and design. Especially beneficial when writing new code. Writing a test first forces you to focus on interface. Hard to test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion.
    • Easier Maintenance. Regression tests are a safety net when making bug fixes. No tested component can break accidentally. No fixed bugs can recur. Essential when refactoring.
    • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation.
    • Debugging. Spend less time in crack-pipe debugging sessions.
    • Automation. Easy to test code is easy to script.
    • Improved Reliability and Security. How does the code handle bad input?
    • Easier to verify the component with memory checking and other tools (e.g. valgrind).
    • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
    • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report.
    • Reduce time spent in System Testing.
    • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
    • Psychological. Instant and positive feedback; especially important during long development projects.

    References
