PerlMonks

The Monastery Gates
New Questions
Creating Perl5 plugin for Intellij IDEA
1 direct reply
by hurricup
on Apr 19, 2015 at 14:41

    Recently I've tried different IDEs for Perl 5, and the experience was... sad...

    Before that I had been developing for a long, long time using Notepad++, no kidding. I'm a Windows user, so no vim or the like.

    Padre seems like a really cool idea, but it's unstable and looks abandoned.

    Currently I'm working with Komodo, but it's far from perfect: it has bugs and, even then, it costs money.

    From my own experience, the most powerful and convenient IDEs are Microsoft Visual Studio and IntelliJ IDEA, so I decided to try to write a plugin for IDEA.

    The first and main problem with a Perl IDE (as far as I can see) is Perl's extremely free syntax. No BNF, no nothing. You can't just point a generator at a grammar and get a lexer and parser. Generators may help, but only help.

    I tried to dig into the perl sources and port toke.c to Java. Bad idea. Too much legacy code, too many little things I don't understand. Porting each function adds a few more functions to port, and some of them require a very deep understanding of what is going on inside. So I dropped that idea and started from scratch.

    Currently I'm building the lexer using JFlex, and I think it's going pretty well. Some syntax coloring already works.

    Now I need to improve the lexer, start writing a parser, and implement the IDEA features. There is a lot of work to do, so if anyone is interested in such a plugin and would like to participate, you are welcome: https://github.com/hurricup/Perl5-IDEA

    You may also contact me via skype: hurricup or my gmail email: hurricup.

    I believe that together we can do it.

    P.S. To participate you should have some knowledge of Perl and Java.

    P.P.S. If you have no experience with Java but are pretty familiar with Perl's insides, your consultations would be appreciated.

Perl koan #2300 (Perl in the browser)
4 direct replies
by rje
on Apr 17, 2015 at 13:46

    Perl koan n. A puzzling, often paradoxical statement or suggestion, used by Perl hackers as an aid to meditation and a means of gaining spiritual awakening or something.


    For the moment, don't think about the obstacles.

    Imagine you could plug Perl into your browser (perhaps defaulting to strict mode, and with Mo(o|u)(se)? built in) whenever JavaScript gets too painful.

    What benefits might that bring? Assuming first that JavaScript is difficult for enterprise-scale apps (hence the existence of Dart, and of frameworks like AngularJS), it seems that Perl could fill that role -- it has been stable for years and has a wide user base.

    Obstacles aside, can you see the utility and beauty of Perl 5 on every browser -- not just Opera?

Is it possible to localize the stat/lstat cache?
5 direct replies
by bounsy
on Apr 17, 2015 at 10:08

    I have a function that gets called a lot (a wanted function for File::Find going against large numbers of files). To avoid constantly hitting the disk, I do one stat/lstat and then use -X _ repeatedly after that (using the cached results of the last stat/lstat).

    In some cases, I need to call other functions that need to be able to use stat/lstat, which will overwrite the cached results in _. The actual calls to these other functions are uncommon in frequency (exception handling, essentially), but there are many places in the main function that might need to call them.

    Ideally, I would like to be able to localize the cached results of the stat/lstat call in some way (in the called functions). Is there a way to do this? (Note that local _; doesn't compile and local *_; doesn't work.) Is there a better and/or more generic approach?

    If I can't localize in any way, one approach I'm considering is caching the results I need in a hash. For example:

    # NOTE: Depending on certain conditions, I need either stat or lstat.
    if (...) { stat($Filename) } else { lstat($Filename) }

    # Current code uses this format:
    #   if (-X _) {...}

    # Possible new code (including only the tests I need to use):
    my %Stat = (
        r => (-r _),
        w => (-w _),
        x => (-x _),
        s => (-s _),
        ...
    );

    if ($Stat{r}) {...}
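    Sketched as a generic helper, that idea might look like this (stat_snapshot and the particular test list are just examples):

    # Hypothetical helper: snapshot the file-test results I care about,
    # so later stat/lstat calls elsewhere can't clobber them.
    sub stat_snapshot {
        my ($filename, $use_lstat) = @_;
        $use_lstat ? lstat($filename) : stat($filename);
        return {
            r => (-r _),
            w => (-w _),
            x => (-x _),
            s => (-s _),
            d => (-d _),
        };
    }

    my $stat = stat_snapshot($Filename, 0);
    if ($stat->{r}) {
        # ... safe to call anything that uses stat/lstat here ...
    }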

    Thoughts?

Populating a hash-ref from multiple arrays with slice?
4 direct replies
by Anonymous Monk
on Apr 16, 2015 at 13:33
    Hi perl monks. I am stumped with this one.

    I have several arrays that I want to slice into a single hash ref. Any tips? I've read a bit about creating a hash slice from two arrays, but I can't figure out the syntax for building a hash ref slice from multiple arrays.

    For example (excuse grammatical errors -- this is more pseudo code than anything):

    my $hash_ref;
    my @array1 = qw( some_unique_key1 some_unique_key2 some_unique_key3 );
    my @array2 = qw( meta_data_1 meta_data_2 meta_data_3 );
    my @array3 = qw( submitted_date1 submitted_date2 submitted_date3 );

    The hash ref would have a structure like:

    $some_unique_key1 -> 'Metadata'       -> $meta_data_1
                      -> 'Submitted Date' -> $submitted_date1
    $some_unique_key2 ... etc.
    $some_unique_key3 ... etc.

    Basically the arrays' indices all correspond to each other -- I just want to merge them all into one accessible table.
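    A sketch of the shape I'm after (assuming the arrays really do line up index for index; key names are from the example above):

    # Plain loop: zip the parallel arrays into a hash of hashes.
    my %hash;
    for my $i (0 .. $#array1) {
        $hash{ $array1[$i] } = {
            'Metadata'       => $array2[$i],
            'Submitted Date' => $array3[$i],
        };
    }
    my $hash_ref = \%hash;

    # Or the same thing in one go with a hash slice:
    @hash{ @array1 } = map {
        +{ 'Metadata' => $array2[$_], 'Submitted Date' => $array3[$_] }
    } 0 .. $#array1;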

    Any tips or slaps up the side of my head appreciated!

Approximate logarithmic statistical module: new() params naming
1 direct reply
by Dallaylaen
on Apr 16, 2015 at 02:37

    Suppose we have a module for approximate memory-efficient statistical analysis which stores data in a set of logarithmic bins. However, around zero, depending on the data, it may be suitable to switch to linear interpolation (as in "no measurement is absolutely precise, why use so many bins").

    For now, the proposed new() interface (not yet released to CPAN) is as follows:

    • base - base of logarithmic intervals, i.e. upper_bound/lower_bound;
    • precision - width of linear intervals around zero;
    • zero_thresh - optional threshold at which to switch to linear, normally it's calculated dynamically (we want to switch where the linear and logarithmic intervals are of the same width).

    I'm ok with my data model, but the parameter names seem a bit weird.

    I would like to rename them to relative_precision, absolute_precision, and linear_threshold respectively. Does that look clear enough?
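    For concreteness, a hypothetical constructor call with the new names (the values here are made up):

    my $stat = Statistics::Descriptive::LogScale->new(
        relative_precision => 0.01,   # was: base, the upper_bound/lower_bound ratio
        absolute_precision => 0.001,  # was: precision, width of the linear bins
        linear_threshold   => 0.1,    # was: zero_thresh
    );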

    I was also thinking of absolute/relative error, but the error really varies and is never more than half the precision. I think this could cause additional WTFs.

    Are there any better ideas?

    The module in question is Statistics::Descriptive::LogScale. Here's the previous discussion.

sudo ignoring string entry after first space encountered
3 direct replies
by perl197
on Apr 14, 2015 at 17:12

    Hi all. I have a production environment that requires the use of sudo to execute Perl scripts by multiple users (emulating a production account, not root). A Perl script accepts a WHERE clause as an input parameter, e.g.:

        sudo myperlscript.pl -cs CONNECTIONSTRING -w "fieldvalue in ('111','222')"

    In myperlscript.pl, -w is defined as a string:

        "w=s" => \$where_condition

    However, when printing $where_condition, the output is simply fieldvalue, with the rest of the string stripped off after the first space. This causes an error when executing the combined SQL statement.

    My question to the Monks community: is there an escape sequence or quote combination you can recommend that allows the entire string to be passed to the variable? I could write the WHERE condition to a file and read it from there, but I would like to solve this riddle with a less cumbersome workaround. Any suggestions are welcome.
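    (To see exactly what survives the trip through sudo and the shell, a quick diagnostic can be added at the top of the script, before any option parsing:)

    # Dump @ARGV exactly as the script receives it, before Getopt::Long touches it.
    use Data::Dumper;
    print Dumper( \@ARGV );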

Undefined value as an ARRAY reference, JSON data
2 direct replies
by johnfl68
on Apr 14, 2015 at 13:35

    Hello. I am reading JSON data from an external API source.

    On some occasions, one of the arrays in the JSON is not present because there is no applicable data. This causes the "Undefined value as an ARRAY reference" error.

    I've been looking around for a while for a way to handle this error gracefully, so that I know simply not to process that data. But every post I have read assumes the data is flawed because the array being looked for should always be there, which isn't the case here.

    It's like a catch-22: I need to read the value to check whether it is defined, but I can't read it because it is undefined.
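    For context, I suspect the kind of guard I need looks something like this ($data here is the decoded JSON, and 'alerts' just stands in for the sometimes-missing key):

    # Check the reference type before dereferencing; a missing key simply skips the loop.
    if ( ref $data->{alerts} eq 'ARRAY' ) {
        for my $item ( @{ $data->{alerts} } ) {
            # ... process each element ...
        }
    }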

    Can anyone please point me in the right direction to handle this error? I am sure it is something simple that is just escaping me.

    Thank you!

session with mysql database.
2 direct replies
by newbie430
on Apr 14, 2015 at 04:46

    I am trying to create a session for my web application using Dancer2, with a MySQL database. I've searched a lot of references on the internet, but because I'm new to Perl they all seem too complicated. Please guide me. Here is my .pm code.

    post '/' => sub {
        my $username = $q->param("username");
        my $password = $q->param("password");
        my $loginstatement = 'SELECT * FROM account WHERE username=? and password=?';
        my $sth = $dbh->prepare($loginstatement) or die $dbh->errstr;
        $sth->execute(params->{'username'}, params->{'password'}) or die $sth->errstr;
        my (@userID) = $sth->fetchrow_array;
        if ($userID[0] != 0 && $userID[11] == 1) {
            my $sessionstatement = "SELECT * FROM sessions";
            my $sth2 = $dbh->prepare($sessionstatement) or die $dbh->errstr;
            $sth2->execute or die $sth2->errstr;   # execute before fetching
            my (@sessID) = $sth2->fetchrow_array;
            if ($userID[3] == $sessID[1]) {
                session 'logged_in' => true;
            }
        }
    };
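    For comparison, a minimal sketch of how Dancer2's session keyword is typically used (the route and key names here are only examples, not my real code):

    use Dancer2;

    post '/login' => sub {
        my $username = body_parameters->get('username');
        my $password = body_parameters->get('password');
        # ... verify the credentials against the database here ...
        session user => $username;    # write to the session
        redirect '/';
    };

    get '/' => sub {
        my $user = session('user');   # read back from the session
        return $user ? "Hello, $user" : redirect('/login');
    };
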
fork child process
3 direct replies
by Anonymous Monk
on Apr 14, 2015 at 04:15

    I need to fork a child process for each array element. I'm using the code below:

    #!/usr/bin/env perl
    my $pid;
    my @a = (1, 2, 3);
    for my $i (@a) {
        print "i -> $i \n";
        $pid = fork();
        if ( $pid ) {
            # parent
            print " child pid : $pid \n";
        }
        elsif ( $pid == 0 ) {
            # child
            #print "child working on $i ";
        }
    }
    while ( (wait()) > 0 ) {}
    exit;

    but executing the code results in more child processes than there are elements in the array. I don't know what the problem is. The output looks like:

        i -> 1
        child pid : 12629
        i -> 2
        i -> 2
        child pid : 12631
        i -> 3
        child pid : 12630
        i -> 3
        child pid : 12632
        i -> 3
        child pid : 12633
        i -> 3
        child pid : 12634
        child pid : 12635

Fastest byteswap (little endian to big endian, e.g. 34127856 -> 12345678)
4 direct replies
by james28909
on Apr 14, 2015 at 00:40
    I have a few lines of code I use for byte-reversal of some 16 MB files. I have got it to convert the byte order in 0.6-0.8 seconds. I was wondering whether there is a faster way than the two methods I have below. Please feel free to crush my record of 0.68640184402465.

    Here is a file to test with, of course: Test File

    use strict;
    use warnings;
    use Time::HiRes qw( time );

    my $start = time();
    open (my $file, '<', $ARGV[0]) or die "cannot open $ARGV[0]: $!";
    binmode($file);
    open (my $reversedFile, '+>', "$ARGV[0].swap") or die "cannot open $ARGV[0].swap: $!";
    binmode($reversedFile);
    my ($data, $n);
    while (($n = read $file, $data, 4096) != 0) {
        print $reversedFile pack("v*", unpack("n*", $data));
    }
    my $end = time();
    my $runtime = sprintf("%.16s", $end - $start);
    print $runtime;
    And another method, which seems a little slower:
    use strict;
    use warnings;
    use Time::HiRes qw( time );

    my $start = time();
    my $input_file = $ARGV[0];
    my $data = do {
        local $/ = undef;
        open (my $fh, "<", $input_file) or die "could not open $input_file: $!";
        binmode($fh);
        <$fh>;
    };
    my $reversed_data = pack( "v*", unpack( "n*", $data ) );
    open my $output, '>', "bkpps3.swap.bin";
    binmode($output);
    print $output $reversed_data;
    my $end = time();
    my $runtime = sprintf( "%.16s", $end - $start );
    print $runtime;
    Like I said, I'm really not sure how to make it faster than it is. Any input is appreciated :)
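    One more variation I have not benchmarked: the same streaming approach with a much larger buffer (1 MiB instead of 4 KiB), on the theory that fewer, bigger pack/unpack calls might win:

    use strict;
    use warnings;
    use Time::HiRes qw( time );

    my $start = time();
    open my $in,  '<', $ARGV[0]        or die "cannot open $ARGV[0]: $!";
    open my $out, '>', "$ARGV[0].swap" or die "cannot open $ARGV[0].swap: $!";
    binmode $_ for $in, $out;

    my $data;
    while ( read $in, $data, 1 << 20 ) {    # 1 MiB chunks instead of 4 KiB
        print {$out} pack 'v*', unpack 'n*', $data;
    }
    printf "%.6f\n", time() - $start;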
Recognizing numbers and creating links
5 direct replies
by htmanning
on Apr 13, 2015 at 16:32
    I posted something about this the other day and got some good suggestions. Unfortunately, some were above my pay grade. I am using the following to recognize 3-digit and 4-digit numbers in a text field. It works, but with a few issues. For one thing it tags phone numbers. No big deal, but I wish I could recognize a 10-digit or 7-digit number with a dash and not link it. The main question I have now is: if the same number appears more than once in the text field, the code mangles the first link and doesn't link the second occurrence. Obviously, the code I have here has a limitation, and I need a suggestion for a way around it.
    my @numbers4 = $text =~ /\b \d{4} \b/gx;
    foreach $unit4 (@numbers4) {
        $text =~ s/$unit4/\<a href=\"unit=$unit4\"\>\<b\>$unit4\<\/b\>\<\/a\>/i;
    }

    # look for 3 digit numbers and make link to Resident Info card.
    my @numbers3 = $text =~ /\b \d{3} \b/gx;
    foreach $unit3 (@numbers3) {
        $text =~ s/$unit3/\<a href=\"?unit=$unit3\"\>\<b\>$unit3\<\/b\>\<\/a\>/i;
    }
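    (One direction I've wondered about, shown only as a sketch -- a single global pass, so each occurrence is wrapped exactly once. I don't know whether it covers all my cases:)

    # Single pass: wrap every 3- or 4-digit number, duplicates included.
    $text =~ s{\b(\d{3,4})\b}{<a href="?unit=$1"><b>$1</b></a>}g;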
    How can I work around the same 3- or 4-digit number appearing multiple times in the $text field? Thanks,
Using regular expressions with arrays
5 direct replies — Read more / Contribute
by andybshaker
on Apr 13, 2015 at 12:09

    Hi, please bear with me as I'm new to Perl. I have two arrays: one is a list of full gene names (GI), and one is a list of just their accession numbers (Accession). It looks like:

    my @GI = ("\Qgi|Q384722390|emb|WP_938420210.1|Gene name\E","\Qgi|34254 +6780|emb|WP_934203412.1|Gene name\E"); my @Accession = ("WP_938420210.1","WP_934203412.1");

    That is only an abbreviated example. In the real program, the GI list is much longer because it contains all the genes, and the Accession array only contains the Accession numbers of the ones I'm interested in. The Accession numbers are part of the GI full name, so I thought I could use a regular expression to go through each element of the arrays and find matches for the 469 accession numbers, like this:

    my $X = 0; my $Y = 0; while($X <= 468){ if(/$Accession[$X]/ =~ $GI[$Y]){ print $GI[$Y]; $X = $X + 1; $Y = 0; } else{$Y++}; }

    However, when I do this, I get the error "Use of uninitialized value in pattern match (m//) and also use of uninitialized value within @GI in regexp compilation. Does anyone know what I am doing wrong? Thank you!

New Meditations
Data-driven Programming: fun with Perl, JSON, YAML, XML...
6 direct replies
by eyepopslikeamosquito
on Apr 19, 2015 at 04:41

    The programmer at wit's end for lack of space can often do best by disentangling himself from his code, rearing back, and contemplating his data. Representation is the essence of programming.

    -- from The Mythical Man Month by Fred Brooks

    Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

    -- Rob Pike

    As part of our build and test automation, I recently wrote a short Perl script for our team to automatically build and test specified projects before checkin.

    Lamentably, another team had already written a truly horrible Windows .BAT script to do just this. Since I find it intolerable to maintain code in a language lacking subroutines, local variables, and data structures, I naturally started by re-writing it in Perl.

    Focusing on data rather than code, it seemed natural to start by defining a table of properties describing what I wanted the script to do. Here is a cut-down version of the data structure I came up with:

    # Action functions (return zero on success).
    sub find_in_file {
        my $fname  = shift;
        my $str    = shift;
        my $nfound = 0;
        open( my $fh, '<', $fname ) or die "error: open '$fname': $!";
        while ( my $line = <$fh> ) {
            if ( $line =~ /$str/ ) {
                print $line;
                ++$nfound;
            }
        }
        close $fh;
        return $nfound;
    }
    # ...

    # ------------------------------------------------------------------------
    # Globals (mostly set by command line arguments)
    my $bldtype = 'rel';

    # ------------------------------------------------------------------------
    # The action table @action_tab below defines the commands/functions
    # to be run by this program and the order of running them.
    my @action_tab = (
        {
            id      => 'svninfo',
            desc    => 'svn working copy information',
            cmdline => 'svn info',
            workdir => '',
            logfile => 'minbld_svninfo.log',
            tee     => 1,
            prompt  => 0,
            run     => 1,
        },
        {
            id      => 'svnup',
            desc    => 'Run full svn update',
            cmdline => 'svn update',
            workdir => '',
            logfile => 'minbld_svnupdate.log',
            tee     => 1,
            prompt  => 0,
            run     => 1,
        },
        # ...
        {
            id      => "bld",
            desc    => "Build unit tests ${bldtype}",
            cmdline => qq{bldnt ${bldtype}dll UnitTests.sln},
            workdir => '',
            logfile => "minbld_${bldtype}bldunit.log",
            tee     => 0,
            prompt  => 0,
            run     => 1,
        },
        {
            id      => "findbld",
            desc    => 'Call find_strs_in_file',
            fn      => \&find_in_file,
            fnargs  => [ "minbld_${bldtype}bldunit.log", '[1-9][0-9]* errors' ],
            workdir => '',
            logfile => '',
            tee     => 1,
            prompt  => 0,
            run     => 1,
        },
        # ...
    );

    Generally, I enjoy using property tables like this in Perl. I find them easy to understand, maintain and extend. Plus, a la Pike above, focusing on the data first usually makes the coding a snap.

    Basically, the program runs a specified series of "actions" (either commands or functions) in the order specified by the action table. In the normal case, all actions in the table are run. Command line arguments can further be added to specify which parts of the table you want to run. For convenience, I added a -D (dry run) option to simply print the action table, with indexes listed, and a -i option to allow a specific range of action table indices to be run. A number of further command line options were added over time as we needed them.

    Initially, I started with just commands (returning zero on success, non-zero on failure). Later "action functions" were added (again returning zero on success and non-zero on failure).
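    To make that concrete, here is a rough sketch of the kind of dispatch loop that drives such a table (simplified; the real script's logging, tee, prompt and dry-run handling are omitted):

    # Run each action in table order; an action is either a command line
    # or a function reference, both returning zero on success.
    for my $action (@action_tab) {
        next unless $action->{run};
        print "=== $action->{id}: $action->{desc}\n";
        my $rc = $action->{fn}
            ? $action->{fn}->( @{ $action->{fnargs} } )
            : system( $action->{cmdline} );
        die "action '$action->{id}' failed (rc=$rc)\n" if $rc != 0;
    }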

    As the table grew over time, it became tedious and error-prone to copy and paste table entries. For example, if four different directories need to be built, rather than have four near-identical entries in the action table differing only in the directory name, I wrote a function that takes a list of directories and returns the corresponding action table entries. None of this was planned; the script just evolved naturally over time.

    Now it is time to take stock, hence this meditation.

    Coincidentally, around the same time as I wrote my little script, we inherited an elaborate testing framework that specified tests via XML files. To give you a feel for these, here is a short excerpt:

    <Test>
      <Node>Muss</Node>
      <Query>Execute some-command</Query>
      <Valid>True</Valid>
      <MinimumRows>1</MinimumRows>
      <TestColumn>
        <ColumnName>CommandResponse</ColumnName>
        <MatchesRegex row="0">THRESHOLD STARTED.*Taffy</MatchesRegex>
      </TestColumn>
      <TestColumn>
        <ColumnName>CommandExitCode</ColumnName>
        <Compare function="Equal" row="0">0</Compare>
      </TestColumn>
    </Test>

    Now, while I personally detest using XML for these sorts of files, I felt the intent was good, namely to clearly separate the code from the data, thus allowing non-programmers to add new tests.

    Seeing all that XML at first made me feel disgusted ... then uneasy because my action table was embedded in the script rather than more cleanly represented as data in a separate file.

    To allow my script to be used by other teams, and by non-programmers, I need to make it easier to specify different action tables without touching the code. So I seek your advice on how to proceed:

    • Encode the action table as an XML file.
    • Encode the action table as a YAML file.
    • Encode the action table as a JSON (JavaScript Object Notation) file.
    • Encode the action table as a "Perl Object Notation" file (and read/parse via string eval).
    • Turn the script and action table/s into Perl module/s.
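    To make the "Perl Object Notation" option concrete, here is a sketch (the file name actions.pl is made up): the table lives in its own file as plain Perl data, and the script loads it with do rather than a string eval.

    # actions.pl -- the action table as plain Perl data; the arrayref
    # is the last expression in the file, so do() returns it.
    [
        {
            id      => 'svninfo',
            desc    => 'svn working copy information',
            cmdline => 'svn info',
            workdir => '',
            logfile => 'minbld_svninfo.log',
            tee     => 1,
            prompt  => 0,
            run     => 1,
        },
        # ...
    ]

    # In the main script:
    my $action_tab = do './actions.pl'
        or die "error: cannot load actions.pl: ", $@ || $!;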

    Another concern is that when you have thousands of actions, or thousands of tests, a lot of repetition creeps into the data files. Now dealing with repetition (DRY) in a programming language is trivial -- just use a function or a variable, say -- but what is the best way of dealing with unwanted repetition in XML, JSON and YAML data files? Suggestions welcome.
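    (To show why the programming-language half is the easy part, here is the sort of thing I mean by "just use a function or a variable" -- shared defaults merged under each entry; the field values are from the table above:)

    # DRY in Perl itself: common fields live in one defaults hash,
    # and each entry lists only what differs.
    my %defaults = ( workdir => '', tee => 1, prompt => 0, run => 1 );
    my @action_tab = map { +{ %defaults, %$_ } } (
        { id => 'svninfo', desc => 'svn working copy information',
          cmdline => 'svn info',   logfile => 'minbld_svninfo.log' },
        { id => 'svnup',   desc => 'Run full svn update',
          cmdline => 'svn update', logfile => 'minbld_svnupdate.log' },
    );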


Effective Automated Testing
No replies
by eyepopslikeamosquito
on Apr 18, 2015 at 04:50

    I'll be giving a talk at work about improving our test automation. Feedback on the talk content and general approach is welcome, along with any automated-testing anecdotes you'd like to share. Possible talk sections are listed below.

    Automation Benefits

    • Reduce cost.
    • Improve testing accuracy/efficiency.
    • Regression tests ensure new features don't break old ones. Essential for continuous delivery.
    • Automation is essential for tests that cannot be done manually: performance, reliability, stress/load testing, for example.
    • Psychological. More challenging/rewarding. Less tedious. Robots never get tired or bored.

    Automation Drawbacks

    • Opportunity cost of not finding bugs had you done more manual testing.
    • Automated test suite needs ongoing maintenance. So test code should be well-designed and maintainable; that is, you should avoid the common pitfall of "oh, it's only test code, so I'll just quickly cut n paste this code".
    • Cost of investigating spurious failures. It is wasteful to spend hours investigating a test failure only to find out the code is fine, the tests are fine, it's just that someone kicked out a cable. This has been a chronic nuisance for us, so ideas are especially welcome on techniques that reduce the cost of investigating test failures.
    • May give a false sense of security.
    • Still need manual testing. Humans notice flickering screens and a white form on a white background.

    When and Where Should You Automate?

    • Testing is essentially an economic activity. There are an infinite number of tests you could write. You test until you cannot afford to test any more. Look for value for money in your automated tests.
    • Tests have a finite lifetime. The longer the lifetime, the better the value.
    • The more bugs a test finds, the better the value.
    • Stable interfaces provide better value because it is cheaper to maintain the tests. Testing a stable API is cheaper than testing an unstable user interface, for instance.
    • Automated tests give great value when porting to new platforms.
    • Writing a test for customer bugs is good because it helps focus your testing effort around things that cost you real money and may further reduce future support call costs.

    Adding New Tests

    • Add new tests whenever you find a bug.
    • Around code hot spots and areas known to be complex, fragile or risky.
    • Where you fear a bug. A test that never finds a bug is poor value.
    • Customer focus. Add new tests based on what is important to the customer. For example, if your new release is correct but requires the customer to upgrade the hardware of 1000 nodes, they will not be happy.
    • Documentation-driven tests. Go through the user manual and write a test for each example given there.
    • Add tests (and refactor code if appropriate) whenever you add a new feature.
    • Boundary conditions.
    • Stress tests.
    • Big ones, but not too big. A test that takes too long to run is a barrier to running it often.
    • Tools. Code coverage tools tell you which sections of the code have not been tested. Other tools, such as static (e.g. lint) and dynamic (e.g. valgrind) code analyzers, are also useful.

    Test Infrastructure and Tools

    • Single step, automated build and test. Aim for continuous integration.
    • Clear and timely build/test reporting is essential.
    • Quarantine flaky failing tests quickly; run separately until solid, then return to main build. No broken windows.
    • Make it easy to find and categorize tests. Use test metadata.
    • Integrate automated tests with revision control, bug tracking, and other systems, as required.
    • Divide test suite into components that can be run separately and in parallel. Quick test turnaround time is crucial.

    Design for Testability

    • It is much easier/cheaper to write automated tests for systems that were designed with testability in mind in the first place.
    • Interfaces Matter. Make them: consistent, easy to use correctly, hard to use incorrectly, easy to read/maintain/extend, clearly documented, appropriate to audience, testable in isolation.
    • Dependency Injection is perhaps the most important design pattern in making code easier to test.
    • Mock Objects are also frequently useful, and the idea is broader than just code. For example, I've written a number of mock servers in Perl (e.g. a mock SMTP server) so as to easily simulate errors, delays, and so on; a toy sketch follows this list.
    • Consider ease of support and diagnosing test failures during design.
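    To give a flavour of the mock-server idea, here is a toy sketch (not one of our real mock servers): a fake SMTP server that rejects every DATA command, so a client's error handling can be exercised. The port and responses are arbitrary.

    use strict;
    use warnings;
    use IO::Socket::INET;

    my $server = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        LocalPort => 2525,
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    while ( my $client = $server->accept ) {
        print {$client} "220 mock.example.com ESMTP\r\n";
        while ( my $line = <$client> ) {
            if    ( $line =~ /^(?:HELO|EHLO)/i ) { print {$client} "250 mock\r\n" }
            elsif ( $line =~ /^(?:MAIL|RCPT)/i ) { print {$client} "250 OK\r\n" }
            elsif ( $line =~ /^DATA/i )          { print {$client} "451 simulated temporary failure\r\n" }
            elsif ( $line =~ /^QUIT/i )          { print {$client} "221 bye\r\n"; last }
            else                                 { print {$client} "500 unrecognized\r\n" }
        }
        close $client;
    }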

    Test Driven Development (TDD)

    • Improved interfaces and design. Especially beneficial when writing new code. Writing a test first forces you to focus on interface. Hard to test code is often hard to use. Simpler interfaces are easier to test. Functions that are encapsulated and easy to test are easy to reuse. Components that are easy to mock are usually more flexible/extensible. Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion.
    • Easier Maintenance. Regression tests are a safety net when making bug fixes. No tested component can break accidentally. No fixed bugs can recur. Essential when refactoring.
    • Improved Technical Documentation. Well-written tests are a precise, up-to-date form of technical documentation.
    • Debugging. Spend less time in crack-pipe debugging sessions.
    • Automation. Easy to test code is easy to script.
    • Improved Reliability and Security. How does the code handle bad input?
    • Easier to verify the component with memory checking and other tools (e.g. valgrind).
    • Improved Estimation. You've finished when all your tests pass. Your true rate of progress is more visible to others.
    • Improved Bug Reports. When a bug comes in, write a new test for it and refer to the test from the bug report (a small example follows this list).
    • Reduce time spent in System Testing.
    • Improved test coverage. If tests aren't written early, they tend never to get written. Without the discipline of TDD, developers tend to move on to the next task before completing the tests for the current one.
    • Psychological. Instant and positive feedback; especially important during long development projects.
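    As a tiny illustration of the bug-report point above (Test::More is real; My::Parser and bug #1234 are invented for the example):

    use strict;
    use warnings;
    use Test::More tests => 2;

    use My::Parser;    # hypothetical module under test

    # Bug #1234: tokenize() died on empty input instead of returning an empty list.
    my @tokens = My::Parser::tokenize('');
    is_deeply( \@tokens, [], 'empty input yields an empty token list (bug #1234)' );

    # Guard the basic behaviour too, so the fix cannot quietly break it.
    my @basic = My::Parser::tokenize('a b');
    is( $basic[0], 'a', 'basic tokenization still works' );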

