New Questions
Getting constructor caller in Mo/Moo/Moose BUILD/BUILDARGS
1 direct reply — Read more / Contribute
by perlancar
on Jul 25, 2015 at 07:58
What is the proper way to find out, inside Mo/Moo/Moose's BUILD or BUILDARGS, which package called our constructor (the object's client code)? I'm okay with getting a subclass.
From a quick glance at the Moo and Moose codebases, neither seems to provide a utility routine for this. A quick search on CPAN hasn't turned up anything yet.
Example:
package C1;
use Moo;
has attr1 => (is => 'rw');

sub BUILD {
    no strict 'refs';
    my $self = shift;
    # XXX set default for attr1 depending on the caller;
    # $object_caller_package is the value I don't know how to obtain
    unless (defined $self->attr1) {
        $self->attr1(${"$object_caller_package\::FOO"});
    }
}

package C2;
use Moo;
extends 'C1';

package main;
our $FOO = 42;
say C2->new->attr1; # prints 42
In principle it should be easy enough to loop over the caller stack and use the first non-Moo* stuff.
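That loop might look like the sketch below. It runs without Moo so the walk itself can be seen in isolation; the package list in the regex is an assumption about what Moo/Moose internal frames look like, and a real version would probably need to extend it (string-eval frames, role application, etc.):

```perl
use strict;
use warnings;

# Walk up the caller stack, skipping Moo/Moose internals and the
# object's own class hierarchy, and return the first "outside" package.
sub object_caller_package {
    my ($self) = @_;
    my $i = 0;
    while (my ($pkg) = caller($i++)) {
        # assumed list of internal namespaces; extend as needed
        next if $pkg =~ /\A(?:Moo|Moose|Method::Generate|Class::MOP|Sub::Quote)\b/;
        next if ref $self && $self->isa($pkg);   # skip the class hierarchy itself
        return $pkg;
    }
    return;   # ran off the top of the stack
}

# Stand-in for a Moo class that would call this from BUILD:
package C1;
sub new {
    my $self = bless {}, shift;
    $self->{client} = main::object_caller_package($self);
    return $self;
}

package C2;
our @ISA = ('C1');

package main;
print C2->new->{client}, "\n";   # prints "main"
```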
RFC: Name and/or API for module ("HTML::RewriteURLs")
4 direct replies — Read more / Contribute
by Corion
on Jul 25, 2015 at 05:45
Once again, I have a module but no name. I come here in the hope of finding a good name that helps others find this module and put it to good use.
Let me first describe what the module does:
The module exports two functions, rewrite_html and rewrite_css. These functions rewrite all things that look like URLs to be relative
to a given base URL. This is of interest when you're converting scraped HTML to self-contained static files. The usage is:
use HTML::RewriteURLs;
my $html = <<HTML;
<html>
<head>
<link rel="stylesheet" src="http://localhost:5000/css/site.css" />
</head>
<body>
<a href="http://perlmonks.org">Go to Perlmonks.org</a>
<a href="http://localhost:5000">Go to home page</a>
</body>
</html>
HTML
my $local_html = rewrite_html( "http://localhost:5000/about", $html );
print $local_html;
__END__
<html>
<head>
<link rel="stylesheet" src="../css/site.css" />
</head>
<body>
<a href="http://perlmonks.org">Go to Perlmonks.org</a>
<a href="..">Go to home page</a>
</body>
</html>
The current name for the module is HTML::RewriteURLs, and this name is bad because the module does not allow or support arbitrary URL rewriting but only rewrites URLs relative to a given URL. The functions are also badly named, because rewrite_html doesn't rewrite the HTML but it makes URLs relative to a given base. And the HTML::RewriteURLs name is also bad/not comprehensive because the module also supports rewriting CSS.
I'm willing to stay with the HTML:: namespace because nobody really cares about CSS before caring about HTML.
I think a better name could be HTML::RelativeURLs, but I'm not sure if other people have the same association. The functions could be renamed to relative_urls_html() and relative_urls_css().
Another name could be URL::Relative or something like that, but that shifts the focus away from the documents I'm mistreating to the URLs. I'm not sure what people look for first.
Below is the ugly, ugly regular expression I use for munging the HTML. I know and accept that this regex won't handle all edge cases, but seeing that there is no HTML rewriting module on CPAN at all, I think I'll first release a simpleminded version of what I need before I cater to the edge cases. I'm not fond of using HTML::TreeParser because it will rewrite the document structure of the scraped pages and the only change I want is the change in the URL attributes.
=head2 C<< rewrite_html >>

Rewrites all HTML links to be relative to the given URL. This
only rewrites things that look like C<< src= >> and C<< href= >> attributes.

Unquoted attributes will not be rewritten. This should be fixed.

=cut

sub rewrite_html {
    my ($url, $html) = @_;
    $url = URI::URL->new( $url );
    #croak "Can only rewrite relative to an absolute URL!"
    #    unless $url->is_absolute;
    # Rewrite relative to absolute
    rewrite_html_inplace( $url, $html );
    $html
}

sub rewrite_html_inplace {
    my $url = shift;
    $url = URI::URL->new( $url );
    #croak "Can only rewrite relative to an absolute URL!"
    #    unless $url->is_absolute;
    # Rewrite relative to absolute
    $_[0] =~ s!((?:src|href)\s*=\s*(["']))(.+?)\2!$1 . relative_url(URI::URL->new( $url ), "$3") . $2!ge;
}
Convert GMT timestamp to EST/EDT
2 direct replies — Read more / Contribute
by gtk
on Jul 25, 2015 at 01:45
Is there a simple way to convert a GMT timestamp to an EST/EDT (or any other time zone) timestamp without installing additional Perl libraries?
A sample of my file (times are in GMT; I want to view them in EDT or EST):
20150619-17:30:43.616, 26
20150619-17:30:33.442, 23
20150619-17:30:40.376, 26
20150619-17:30:38.863, 26
20150619-17:30:56.936, 26
20150619-17:30:34.952, 24
20150619-17:30:45.889, 26
20150619-17:30:53.940, 23
20150619-17:30:51.154, 25
20150619-17:30:48.699, 26
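The core modules Time::Local and POSIX are enough for this: parse the stamp as GMT with timegm(), then format it back with localtime()/strftime() under the target zone. A sketch, assuming "EST/EDT" means US Eastern time (America/New_York):

```perl
use strict;
use warnings;
use Time::Local qw(timegm);          # core module, nothing to install
use POSIX qw(strftime tzset);

$ENV{TZ} = 'America/New_York';       # assumption: EST/EDT = US Eastern
tzset();

sub gmt_to_local {
    my ($stamp) = @_;
    my ($Y, $M, $D, $h, $m, $s, $frac) =
        $stamp =~ /^(\d{4})(\d\d)(\d\d)-(\d\d):(\d\d):(\d\d)(\.\d+)?/
        or return $stamp;            # leave unparseable lines alone
    my $epoch = timegm($s, $m, $h, $D, $M - 1, $Y);   # interpret as GMT
    return strftime('%Y%m%d-%H:%M:%S', localtime($epoch)) . ($frac // '');
}

# 17:30 GMT on 2015-06-19 is 13:30 EDT (UTC-4, daylight time in June)
print gmt_to_local('20150619-17:30:43.616'), "\n";   # 20150619-13:30:43.616
```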
ambiguous regex match
2 direct replies — Read more / Contribute
by Hosen1989
on Jul 24, 2015 at 12:26
Dear ALL,
I was parsing a log file and ran into this bug (I think). The following simple code shows what I'm facing:
use strict;
use warnings;
my $data = 'blabla;tag1=12345;blabla;';
# my $data = 'blabla;tag1=12345;blabla;tag2=99999';
# get tag1 value
$data =~ m/tag1=(\d+)/g;
my $tag1 = $1;
# get tag2 value
$data =~ m/tag2=(\d+)/g;
my $tag2 = $1;
print "tag1 = $tag1\n";
print "tag2 = $tag2\n";
The output:
tag1 = 12345
tag2 = 12345
As you can see, there is only a tag1 value in $data, so the second pattern should not match and $tag2 should be undefined,
but what I got is $tag1 == $tag2!!!
So can any monk here (pretty please) explain to me what happens here?
BR
Hosen
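For reference, perl's documented behaviour is that the capture variables keep their values from the last *successful* match; a failed match leaves them untouched. Matching in list context sidesteps the problem, because a failed match then yields an empty list:

```perl
use strict;
use warnings;

my $data = 'blabla;tag1=12345;blabla;';

# List-context match: the variable gets the capture only on success.
my ($tag1) = $data =~ /tag1=(\d+)/;
my ($tag2) = $data =~ /tag2=(\d+)/;   # no match, so $tag2 stays undef

print "tag1 = ", $tag1 // 'undef', "\n";   # tag1 = 12345
print "tag2 = ", $tag2 // 'undef', "\n";   # tag2 = undef
```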
Variable blasphemy
4 direct replies — Read more / Contribute
by SixTheCat
on Jul 24, 2015 at 10:40
Oh monks of the Holy Order of Perl. I bring grave news of blasphemy in my variables! I am new to perl (only a week in) so it's very likely a problem with me but...
I'm trying to write a simple script that opens a CSV file and gets two columns, which are then used to convert one name to the other. The problem is that while each line is read correctly from the file and seems to split correctly, the variables act wonky afterwards. If I print (or say) both variables in the same string, the string displays fine with one variable first, but with the other variable first, part of the output doesn't show up. I don't see any hidden terminating characters in the CSV file that could cause this. Any ideas?
The csv file contains the following data:
rs6413438,CYP2C19_10
rs4986910,CYP2C19_20
The output looks something like this
--------- Converting Star Allele references to rs numbers ---------
Current input line is
Index 0 is rs6413438
Index 0 is rs6413438 is stored as rs6413438 <-- Correct display
Index 1 is CYP2C19_10
Index 1 is CYP2C19_10
is stored as CYP2C19_10 <--- WTF, where is the first variable?
Comparing CYP2C19_10
and CYP2C19_10
Comparing CYP2C19_10
and CYP2C19_12
Current input line is
Index 0 is rs4986910
Index 0 is rs4986910 is stored as rs4986910 <-- Correct display
Index 1 is CYP2C19_20
Index 1 is CYP2C19_20
is stored as CYP2C19_20 <--- WTF
Comparing CYP2C19_20
and CYP2C19_10
Comparing CYP2C19_20
and CYP2C19_12
-------------- Done converting Star Allele references -------------
#!perl
use strict;
use 5.010;

my $STARFile;              # File handle to reference file
my @Stars;                 # Mock array of values to cross reference
$Stars[0] = "CYP2C19_10";
$Stars[1] = "CYP2C19_12";

if(@Stars==0){return;}     # If no Star Alleles were specified then no need
                           # to do this so return to the main body

if(! open $STARFile,"<","test.csv"){die "Reference file could not be found or could not be opened.";}
                           # Open the Star reference file to prepare to convert
                           # information and store the file handle to $STARFile.

print "Converting specified Star Designations to SNPs...";

# The conversion table is opened so convert the Star name to rs numbers and
# then store the rs numbers to the @SNPs array and the corresponding Star
# name to the @Stars array at the same index.
my @SNPs;
my @StarsCon;
my $RefIndex;              # Holds the line in the reference table file
my $StarIndex;             # Holds the index of the @Stars array that is being checked
my $tmpSNPIndex;           # Holds the index in the @SNPs array that we are comparing
my $tmpStar;               # Holds the Star Allele name
my $tmpRS;                 # Holds the SNP's rs number
my @tmpConv;               # Holds the split Star and rs numbers

say "\n--------- Converting Star Allele references to rs numbers ---------";
while (<$STARFile>){                   # Input a line from the database, as long
                                       # as we haven't reached the end of the file
    chomp;                             # Remove the trailing newline
    say "Current input line is @_";
    @tmpConv = split ",",$_;           # Split the CSV line from the reference table such
                                       # that $tmpConv[0] = Star name and $tmpConv[1] = rs number
    $tmpStar = $tmpConv[1];
    $tmpRS = $tmpConv[0];
    say "Index 0 is $tmpConv[0]";                       # Displays correctly
    say "Index 0 is $tmpConv[0] is stored as $tmpRS";   # Displays correctly
    say "Index 1 is $tmpConv[1]";                       # Displays correctly
    say "Index 1 is $tmpConv[1] is stored as $tmpStar"; # Displays INcorrectly
    for($StarIndex=0;$StarIndex<@Stars;$StarIndex++){
        say "Comparing $tmpStar and $Stars[$StarIndex]";
        if($tmpStar eq $Stars[$StarIndex]){    # If the current line of the database file
                                               # contains the Star Allele rs number then
            $tmpSNPIndex = @SNPs;              # Get the number of entries in the @SNPs array.
            say "1. $tmpRS was converted from $tmpStar";
            say "2. $tmpStar was converted to $tmpRS";
            say "3. $tmpStar was converted to $tmpRS";
            say "4. $tmpRS was converted from $tmpStar";
            push @StarsCon, $tmpStar;          # Add the Star allele name to the @StarsCon array
            push @SNPs, $tmpRS;                # Add the new rs number to the @SNPs array
            if(@Stars>0){                      # If we have more than one SNP then
                splice @Stars,$StarIndex,1;    # remove this one from the @Stars array
            }else{                             # Otherwise pop off the last one
                pop @Stars;
            }
            last;                              # Exit the for loop
        }
    }
    if(! @Stars>0){last;}                      # If that was the last entry then stop searching
}
say "-------------- Done converting Star Allele references -------------";
if(@Stars>0){                                  # If any SNPs have not been found then
    say "\n"."Conversions not completed: @Stars.";  # Inform the user which ones were not found
}else{                                         # Otherwise
    say "\n"."All conversions successful.";    # Inform the user that all were found
}
close $STARFile;                               # Close the reference file
print "Done!\n";
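For what it's worth, output where the second field prints fine on its own but "eats" whatever follows it on the line is consistent with a trailing carriage return: if the CSV came from Windows, each line ends in "\r\n", chomp removes only the "\n", and the "\r" ends up stuck to the last split field. A minimal demonstration and fix:

```perl
use strict;
use warnings;

# Simulating the last field split out of a CRLF line after chomp:
# chomp removed the "\n" but left the "\r" behind.
my $field = "CYP2C19_10\r";

print length($field), "\n";   # 11, one more than the 10 visible characters
$field =~ s/\r\z//;           # strip a trailing CR explicitly
print length($field), "\n";   # 10
```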
Copying an ascii text file, replicating end of line terminator (windows or unix)
4 direct replies — Read more / Contribute
by luckycat
on Jul 24, 2015 at 04:23
I've written a Perl script, running on Linux, that copies an ASCII text file to a new file; the line terminators may be Windows-style (\r\n) or Unix (\n). On certain lines that match a string I'm looking for, I process them before writing the line back out to the new file.
On lines I don't process, a simple print OUTFILE $_; works great, as it replicates whatever line terminator the line uses and writes that to the output file.
But for the lines I do process, I need to add the line terminator back manually when writing the processed line out. I'm doing this check right now:
my $endofline = ( /\r\n$/ ) ? "\r\n" : "\n";
Then here's the code for the processed line I'd write out:
print OUTFILE "$processed_string","$endofline";
My script works but I'm wondering if there's a better, cleaner way to do this?
Currently I'm doing the end of line check within the while loop that processes each line of the input file so every single line is checked which is probably not efficient. I wanted to guard against the case where you could possibly have mixed windows and unix end of line terminators in the same file. However if that's extremely rare I guess I could remove the check from within the while loop that processes each line of the input file. If I do that, how would I get the type of line terminator the file uses so I know what to use in the print statement later?
Basically: is there a better way to do what I'm trying to do?
Thanks for any tips.
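One possible tightening of the approach described above: capture each line's own terminator while stripping it, so even a file with mixed CRLF/LF lines round-trips correctly, and the per-line check does double duty as the chomp. A sketch (in-memory filehandles stand in for the real input and output files):

```perl
use strict;
use warnings;

my $input = "keep me\r\nprocess me\nkeep me too\r\n";
open my $in,  '<', \$input     or die $!;
open my $out, '>', \my $output or die $!;

while (my $line = <$in>) {
    # Strip and remember this line's terminator in one step;
    # the last line of a file may have none at all.
    my $eol = $line =~ s/(\r?\n)\z// ? $1 : '';
    $line = uc $line if $line =~ /process/;   # stand-in for the real processing
    print $out $line, $eol;
}
close $out;
print $output;
```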
Log::Log4perl and Net::SMTP
No replies — Read more | Post response
by tangent
on Jul 22, 2015 at 23:38
Venerable Monks,
I started using Log4perl some time ago and it is the bee's knees, though I only use its most basic functionality. I use it in a cron job that goes something like this:
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init( { level => $INFO, file => $logfile } );
INFO "Start process...";
# lots of stuff here
my $sender = Email::Stuffer->new;
set_transport($sender); # set up SMTP
# construct email here, then
my $sent = $sender->send;
if ( $sent ) {
INFO "Email sent...";
}
# ...
The problem is that this spews out a large amount of log messages from Net::SMTP (including the entire contents of the email) though when I look at the source of that module I can't find any references to Log4perl or any of its directives. I am guessing that one of the Email::Stuffer modules might be causing it. At the moment I use this workaround:
Log::Log4perl->easy_init( { level => $ERROR, file => $logfile } );
my $sender = Email::Stuffer->new;
# etc
my $sent = $sender->send;
Log::Log4perl->easy_init( { level => $INFO, file => $logfile } );
This suppresses the Net::SMTP messages but I am wondering why my script is controlling this output, and if there is another way to deal with it.
Update: As is often the case I think I have found the answer just after asking the question - when I set up the SMTP transport I have debug => 1 so maybe Log4perl is appropriating the debug messages. Will test later.
Uploading a devel version to PAUSE
3 direct replies — Read more / Contribute
by stevieb
on Jul 22, 2015 at 20:40
After 15 years of writing Perl, I've finally achieved something that I've wanted for a very long time... a module that allows you to inject code into subroutines, and add subs on the fly to files. The original purpose of this was to add a call to a stack trace method to each sub, so a long running program would keep track of its own information.
I'm days away from releasing candidate one of the upgraded module, and I'll be requesting of the Monks who have some time to spare to review the README for too much/too little info, and if they are so kind, to test out the code examples provided in the SYNOPSIS (and beyond).
What I'd like to do is see how it looks after the CPAN parser does its job.
My question is this: can I upload a devel version to PAUSE (a devel version on CPAN is one that has a _NN designation at the end of the version info) without pissing all over my existing, relatively stable version, and view it rendered all the same, while current users still pull stable with a cpan install Module::Name?
I could try it, but I felt it better to ask than to put undue load on the CPAN servers.
-stevieb
For those curious, current code can be found in the 'engine' branch here: DES.
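As far as I understand the PAUSE conventions (treat this as a sketch, not gospel): the underscore in the version string is exactly what makes it a developer release, so the indexer leaves the stable release as "latest" and a plain cpan Module::Name keeps installing stable; a distribution tarball whose name ends in -TRIAL is treated the same way. The usual version dance looks like:

```perl
package Module::Name;   # hypothetical module
use strict;
use warnings;

# The underscore marks this upload as a developer release: PAUSE accepts
# it and CPAN mirrors carry it, but cpan clients won't install it by default.
our $VERSION = '1.23_01';

# Classic idiom: keep the string form in the source for the toolchain,
# but use a plain number at runtime to avoid warnings in comparisons.
$VERSION = eval $VERSION;   # 1.2301

1;
```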
running Image::Magick with Windows Perl in 2015?
4 direct replies — Read more / Contribute
by pm395080566
on Jul 22, 2015 at 17:19
Hi all,
I have a bunch of servers running ActivePerl 5.14 and we make heavy use of PerlMagick aka Image::Magick.
I need to upgrade to a newer Perl so that LWP can speak the newer SSL/TLS...
It looks like no new ActivePerl releases support Image::Magick (nor XML::LibXML) -- or does anybody know otherwise?
I am looking at Strawberry Perl, but I also cannot get Image::Magick to install there. Has anyone actually gotten Image::Magick to install/work on a modern Strawberry Perl?
Thanks,
R
Modifying/Accessing hash in separate script
4 direct replies — Read more / Contribute
by mdskrzypczyk
on Jul 22, 2015 at 14:29
Hello Monks, yesterday I posted a question about refactoring code for a company and now I have another question about doing so. The previous writers of the code never used use strict/use warnings, so I'm trying to make the code use them. Needless to say, the moment I add them a bunch of scope errors get thrown.
My question is this: there is a script that contains purely hashes holding "global" info, such as work directories, server names, departments, etc. These hashes are modified by a subroutine that reads a config file and adds entries reflecting its contents, and the configured hash is then used in the main automation scripts. It works currently without strict/warnings, but when I turn them on it complains. I believe I read something about packages and using "our" for the hashes, but I haven't read anything specific enough to my issue to trust it completely. Can anyone help me set up the files correctly for this kind of processing? E.g.:
1. The global hash lives in a separate file.
2. Said hash is modified by a subroutine only once, when the scripts are called (this was done by a config subroutine living in a script that was required by another script; I believe that when that script was required it invoked the subroutine and configured the hash).
3. The global hash is properly accessible by all other automation scripts.
Thank you for any insight, I greatly appreciate it.
EDIT(Sorry for not being detailed enough):
So I have a hash that lives in one script, say global.pl, it looks like this:
%GLOBAL = (
    dir1 => "/path/to/directory",
    dir2 => "/path/to/another",
);
Then I have another script that contains a subroutine to modify this hash, as well as a call to said subroutine; I believe this call gets run when the script is "required". This is how that script looks; let's call it config.pl:
[requires at the top]
read_config_files();
sub read_config_files {
    # Do operations that read .txt files and load
    # info into the %GLOBAL hash structure.
}
Now these previous scripts are never explicitly called, the main automation script looks like this, let's call it main.pl:
#A few requires...
require "global.pl";
require "config.pl";
my $first_directory = $GLOBAL{dir1};
etc...
So that after the config.pl is "required" it executes the read_config_files subroutine and the %GLOBAL hash looks like this:
%GLOBAL = (
    dir1 => "/path/to/directory",
    dir2 => "/path/to/another",
    newstuff => "a string that was added",
);
But I get: Global symbol "%GLOBAL" requires explicit package name at main.pl line xx.
My question is: how can I properly set this up so that the error is not thrown? Or, how can I convert the way this is working? Note that it is not just main.pl that accesses members of %GLOBAL this way; a few other scripts get their global information from this hash, so I'm trying to keep only one copy of the hash and centralize all the info there.
UPDATE:
I've managed to fix the problem by putting the hashes into a package! Thanks for all your help
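For anyone landing here with the same problem, a minimal sketch of that fix (the file name Global.pm and the package name are made up; it replaces the old global.pl/config.pl pair):

```perl
# Global.pm — hypothetical module holding the shared hash
package Global;
use strict;
use warnings;

# "our" gives the hash a package-qualified name that strict accepts.
our %GLOBAL = (
    dir1 => "/path/to/directory",
    dir2 => "/path/to/another",
);

sub read_config_files {
    # Would read the .txt config files; hard-coded here for the sketch.
    $GLOBAL{newstuff} = "a string that was added";
}

read_config_files();   # run once at load time, as the old require did

1;
```

Client scripts then say use Global; and read $Global::GLOBAL{dir1}, so every script shares the one copy of the hash.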
Self-contained Perl installation with custom modules on USB stick for *nix systems
3 direct replies — Read more / Contribute
by aoeuidhtn
on Jul 22, 2015 at 06:42
I use Perl to run kpcli, a command-line interface to KeePass: http://kpcli.sourceforge.net/. It requires several non-standard modules that can be installed from CPAN. I want a USB stick containing a standalone Perl interpreter for both x86 and x86_64 systems, all modules required to run kpcli, and the kpcli code itself. I want to carry this USB stick around in case I find myself needing to run kpcli on someone else's machine, a machine without Perl installed, or a newly installed system, in order to gain access to my password-protected Git repository with all my config files. But this question is not about how to achieve my goal; it's about whether the way I did it is the best one, and what the potential pitfalls are. What I did is:
- I downloaded Perl source code
- I installed it to ~/perl-install on my PC
- I used ~/perl-install/bin/perl to install all modules required by kpcli using -MCPAN. They were installed to ~/perl-install directory
- I copied a whole ~/perl-install directory to my USB stick
- I plugged the USB stick into another machine and set PERL5LIB to the directories on the stick, like this:
$ export PERL5LIB=$PWD/lib/5.22.0/:$PWD/lib/site_perl/5.22.0/i686-linux:$PWD/lib/site_perl/5.22.0:$PWD/lib/5.22.0/i686-linux
- I ran bin/perl kpcli from the USB stick
I compiled and copied a Perl interpreter for both x86 and x86_64 because some systems, such as Slackware, cannot run x86 binaries on x86_64 out of the box. Is this a good way to do it?
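A launcher script at the root of the stick can take care of both the architecture choice and the PERL5LIB juggling. This is only a sketch: the directory layout (perl-i686/, perl-x86_64/, each holding one full ~/perl-install tree), the 5.22.0 version string, and the script name are all assumptions:

```shell
#!/bin/sh
# run-kpcli.sh — hypothetical launcher at the root of the USB stick.
STICK=$(cd "$(dirname "$0")" && pwd)
ARCH=$(uname -m)                       # i686 or x86_64
PREFIX="$STICK/perl-$ARCH"
V=5.22.0

PERL5LIB="$PREFIX/lib/$V:$PREFIX/lib/$V/$ARCH-linux"
PERL5LIB="$PERL5LIB:$PREFIX/lib/site_perl/$V:$PREFIX/lib/site_perl/$V/$ARCH-linux"
export PERL5LIB

if [ -x "$PREFIX/bin/perl" ]; then
    exec "$PREFIX/bin/perl" "$STICK/kpcli" "$@"
fi
echo "no perl build for $ARCH on this stick" >&2
```

A further pitfall to keep in mind: the perl binary itself records its compiled-in @INC prefix, so building with Configure's -Duserelocatableinc option makes the tree more tolerant of living at an arbitrary mount point.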
Creating Variables Just to Pass into Subroutine?
7 direct replies — Read more / Contribute
by mdskrzypczyk
on Jul 21, 2015 at 16:38
Hi guys, this is my first post and I've only recently started learning Perl, so please be gentle. I'm not new to programming, but I'm currently working at a company that uses Perl to automate some financial business; my job is to refactor old code because some of it is outdated, no longer used, or could look a little friendlier to the user. My question is about the following code snippet:
$who = "su_and_it";
$subject = "Error processing $filename";
$severity = 3;
$message = "Server: $server\n";
$message = $message."File: $origfile\n";
$message = $message."Error: Departments missing from WWMBR_NAMES.TXT";
$othercontact = "None";
$path = $work_dir;
$file = $missingdeptfilename;
&Notify($othercontact,$who,$severity,$subject,$message,$path,$file);
Does it make sense to create these variables and immediately pass them into this subroutine right after? Or should I change the subroutine arguments to just be the values themselves? I feel like creating the variables prior calling the subroutine clutters the code (note that this snippet appears all over the entire set of perl automation but with a few interchanged arguments). Do you have any tips on how to approach this problem? Also, I would greatly appreciate any tips on refactoring perl code in general, methodologies or processes that you may have used before to approach it. I ask because it'll help me develop good scripting practice and also because this is slightly overwhelming to just tackle. Thanks for any help!
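One common refactor for call sites like this is to give the subroutine named arguments, so the values can be passed inline and the defaults live in one place instead of being rebuilt at every call site. A minimal sketch: Notify() here is a stand-in that just formats a line (the real one would of course need the same signature change), and the names are lifted from the snippet above:

```perl
use strict;
use warnings;

# Stand-in Notify() taking a hash of named arguments with defaults.
sub Notify {
    my (%args) = @_;
    $args{othercontact} //= "None";   # defaults now live inside Notify
    $args{severity}     //= 3;
    return "[$args{severity}] $args{subject}: $args{message}";
}

my $filename = "WWMBR_NAMES.TXT";
my $line = Notify(
    who     => "su_and_it",
    subject => "Error processing $filename",
    message => "Departments missing from WWMBR_NAMES.TXT",
);
print $line, "\n";
```

The call site then documents itself, argument order stops mattering, and adding a new optional parameter later doesn't break the dozens of existing calls.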
New Meditations
IBM Cloud Challenge.
1 direct reply — Read more / Contribute
by BrowserUk
on Jul 21, 2015 at 18:07
I just read about an IBM programming challenge to try and entice developers to IBMs Bluemix Cloud development environment.
(Don't bother if you're outside the UK; or if you want to use Perl (it ain't supported :(); or if stupid sign-up processes that don't work annoy you; or ... )
What struck me was that the three programming tasks are, at least notionally, so trivial. It took me less than 5 minutes to write (working, but probably not best) solutions to all three.
(Whether they would pass their test criteria I guess we'll probably never know)
I was also struck by this part of the description: the idea that the test is whether you can put together a programme that runs within a time limit or on limited resources, rather than just lashing together a hideous brute-force monstrosity; and whether you can actually read the questions properly in the first place (a useful start, but one that's often forgotten).
I think it would be interesting to see how the best Perlish solutions we can come up with compare with those other languages that get entered to the competition; when and if they are actually made public.
So have at them. (Don't forget to add <spoiler></spoiler> tags around your attempts.)
I'd post the questions here but I'm not sure it wouldn't be a problem copyright wise?
I'll post my solutions here in a few days.
Beyond Agile: Subsidiarity as a Team and Software Design Principle
3 direct replies — Read more / Contribute
by einhverfr
on Jul 20, 2015 at 21:24
This is a collection of thoughts I have been slowly putting together based on experience, watching the development (and often bad implementation of) agile coding practices. I am sure it will be a little controversial in the sense that some people may not see agile as something to move beyond and some may see my proposals as being agile.
What's wrong with the Waterfall?
I think any discussion of agile programming methodologies has to start with an understanding of what problems agile was intended to solve and this has to start with the waterfall model of development, where software has a slow, deliberate life cycle, where all design decisions are supposed to be nailed down before the code is started. Basically the waterfall approach is intended to apply civil engineering practices to software and while it can work with very experienced teams in some limited areas it runs into a few specific problems.
The first is that while civil engineering projects tend to have well understood and articulated technical requirements, software projects often don't. And while cost of failure in dollars and lives for a civil engineering disaster can be high, with software it is usually only money (this does however imply that for some things, a waterfall approach is the correct one, a principle I have rarely seen argued against by experienced agile developers).
The second is that business software requirements often shift over time in ways that bridges, skyscrapers, etc don't. You can't start building a 30 floor skyscraper and then have the requirements change so that it must be at least 100 floors high. Yet we routinely see this sort of thing done in the software world.
Agile programming methodologies arose to address these problems. They are bounded concerns, not applicable to many kinds of software (for example software regulating dosage of radiotherapy would be more like a civil engineering project than like a business process tool), but the concerns do apply to a large portion of the software industry.
How Agile is Misapplied
Many times when companies try to implement agile programming, they run into a specific set of problems. These include unstructured code and unstructured teams. This is because too many people see agile methodologies as devaluing design and responsibility. Tests are expected to be documentation, documentation is often devalued or unmaintained, and so forth.
Many experienced agile developers I have met in fact suggest that design is king, that it needs to be done right, in place, and so forth, but agile methodologies can be taken by management as devaluing documentation in favor of tests, and devaluing design in favor of functionality.
There are many areas of any piece of software where stability is good, where pace of development should be slow, and where requirements are well understood and really shouldn't be subject to change. These areas cut against the traditional agile concerns and I think require a way of thinking about the problems from outside either the waterfall or agile methodologies.
Subsidiarity as a Team Design Principle
Subsidiarity is a political principle articulated a bit over a hundred years ago in a Papal encyclical. The idea is well steeped in history and applicable well beyond the confines of Catholicism. The basic idea is that people have a right to accomplish things, and that for a larger group to do what a smaller group can therefore constitutes a sort of moral theft. If subsidiarity is good, micromanagement is therefore evil; it also means that teams should be as small as possible, but no smaller.
In terms of team design, small teams are preferable to big teams and the key question is what a given small team can reasonably accomplish. Larger groupings of small teams can then coordinate on interfaces, etc, and sound, stable technological design can come out of this interoperation. The small team is otherwise tasked with doing everything -- design, testing, documentation. Design and testing are as integrated with software development as they are in agile, but as important as they are in the waterfall approach.
This sort of organization is as far from the old waterfall as agile is but it shares a number of characteristics with both. Design is emphasized, as is documentation (because documentation is what coordinates the teams). Stability in areas that need it is valued, but in other areas it is not. Stable contracts develop where these are important and both together and individually, accomplishments are attained.
Subsidiarity as a Technical Design Principle
Subsidiarity in team design has to follow what folks can accomplish but this also is going to mean that these follow technological lines as well. Team responsibility has to be well aligned with technological responsibility. I.e. a team is responsible for components and components are responsible for functionality.
The teams can be thought of as providing distinct pieces of software for internal clients, taking on responsibility to do it right, and to provide something maintainable down the road. Once a piece of software is good and stable, they can continue to maintain it while moving on to another piece.
Teams that manage well defined technical and stable components can then largely end up maintaining a much larger number of such components than teams which manage pieces that must evolve with the business. Those latter teams can move faster because they have stability in important areas of their prerequisites.
But the goal is largely autonomous small teams with encapsulated responsibilities, producing software that follows that process.
Advanced techniques with regex quantifiers
5 direct replies — Read more / Contribute
by smls
on Jul 19, 2015 at 05:29
Lately I've been experimenting again with using Perl regexes more like grammars, i.e. parsing inputs via a single big regex that involves lots of branching, instead of the traditional approach of parsing inputs via imperative "spaghetti code" that sequentially matches lots of small regexes.
However, I quickly ran into two limitations relating to regex quantifiers (* + {}). Here's a write-up of the solutions/workarounds I found, both for my own benefit (so I can refer back to them), and in case others might find it interesting.
Also, I'd love to hear the opinions of other monks on which of these techniques should be used in real code, and whether it would be worth adding new Perl 5 core features to make them obsolete.
TOC: