http://www.perlmonks.org?node_id=21798
User Questions
Switch
1 direct reply — Read more / Contribute
by Cow1337killr
on Aug 06, 2016 at 12:22
Goo-Canvas
3 direct replies — Read more / Contribute
by zentara
on Mar 28, 2008 at 14:11
    Goo::Canvas enhances the old Gnome2::Canvas by basing itself on Cairo. This allows things like zoomable scalable text, rotated text, saving to PDF/SVG, and many other enhancements that bring it on par with, or make it superior to, Tk::Zinc. The best thing to do is install it and run the fine demo.
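
    For a taste of the API, here is a minimal hedged sketch (untested here; constructor signatures and property names follow the style of the Goo-Canvas-0.04 demo code):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Gtk2 '-init';
use Goo::Canvas;

my $window = Gtk2::Window->new('toplevel');
$window->signal_connect(destroy => sub { Gtk2->main_quit });

my $canvas = Goo::Canvas->new;
$canvas->set_size_request(400, 300);
my $root = $canvas->get_root_item;

# Scalable, rotatable text -- the kind of thing Gnome2::Canvas struggled with
my $text = Goo::Canvas::Text->new($root, 'Hello, Goo!', 200, 150, -1, 'center',
    'font' => 'Sans 20');
$text->rotate(30, 200, 150);

$window->add($canvas);
$window->show_all;
Gtk2->main;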

    The underlying C library is goocanvas-0.9.

    Compatible Perl module: Goo-Canvas-0.04

File::GetLineMaxLength
No replies — Read more | Post response
by martin
on Jun 30, 2007 at 13:28
    Line-oriented input processing is easy in Perl, thanks to readline and its angle bracket syntax as in $line = <STDIN>. However, these popular constructs lack control over the size of what is returned, as has been lamented on several occasions. Such control would be beneficial where data from unreliable sources has to be handled. File::GetLineMaxLength, available on CPAN, created by our fellow monk robmueller, addresses this issue. This review discusses version 1.00.

    The module is not yet listed in the Catalogue, but my guess at its DLSIP status would be adpOp.

    Development stage: a - Alpha testing

    The module works only in very special cases, and it has some serious implementation flaws and documentation issues. I'll elaborate below.

    Language used: p - Pure Perl

    Support level: d - Developer

    Interface style: O - Object oriented

    Public License: p - Standard-Perl

    Usage

    Example:
    local $/ = "\n";
    my $LIMIT = 1024;
    my $f = File::GetLineMaxLength->new(\*STDIN);
    my ($line, $tooLong);
    while (length($line = $f->getline($LIMIT, $tooLong))) {
        die "line too long" if $tooLong;
        # process $line
    }

    File::GetLineMaxLength adds a layer of buffering to an already opened file connection. To read lines, you have to create an object that will hold the state of everything, and repeatedly call the object's getline method.

    The constructor new takes an existing filehandle and an optional buffer size. The input record terminator is taken from $/ at the time of object creation -- I would rather have expected either an explicit parameter or the behaviour of IO::Handle, which uses $/ dynamically at the time of each getline call.

    The getline method takes a non-negative integer size limit and optionally a variable. This variable will be set by the method to 1 in case the size limit is hit and 0 otherwise. A limit of zero means no limit. The return value is a string and always defined. After reading to the end of file it is (supposed to be) empty. There is no distinction between normal end of file and error conditions. For the size limitation, line terminator characters are not counted.

    If the number of input characters before a line terminator exceeds the given limit, the returned string will have exactly limit characters and no line terminator, the overflow flag will be set, and the next getline call will continue where the previous one broke off.
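
    Putting the pieces together, here is a hedged sketch of skipping over-long lines instead of dying, using only the version 1.00 API described above (process() is a hypothetical handler):

```perl
use strict;
use warnings;
use File::GetLineMaxLength;

my $LIMIT = 1024;
my $f = File::GetLineMaxLength->new(\*STDIN);

my ($chunk, $tooLong);
while (length($chunk = $f->getline($LIMIT, $tooLong))) {
    if ($tooLong) {
        # The limit was hit before a terminator: drain the rest of this
        # line; each call continues where the previous one broke off.
        1 while length($f->getline($LIMIT, $tooLong)) && $tooLong;
        warn "skipped an over-long line\n";
        next;
    }
    process($chunk);    # $chunk is a complete line, terminator included
}

sub process { print $_[0] }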

    Open Issues

    • getline goes into an endless loop if the input file does not end with a line terminator sequence.
    • The input record separator must be a nonempty string. Other values of $/ reserved for paragraph mode, file slurp mode or fixed length record mode are neither supported nor detected.
    • The buffering strategy makes the module unsuitable for interactive input or scenarios calling for alternating uses of getline and other file operations.
    • The POD documentation contains a wrong usage example (a while loop terminating on the boolean value rather than the size of the result, and wrong filehandle syntax), although it has a nice reference to PerlMonks.
    • Not amusingly, the README file is unaltered h2xs output with mismatched 0.01 version number, going all "blah blah" on us.

    Conclusion

    File::GetLineMaxLength populates an important and often underestimated niche: tools aiding in interface robustness. It could become useful if it were developed a bit further.

    Update: Changed wording about documentation issues. Why has this section no preview button?

Pod::Webserver
1 direct reply — Read more / Contribute
by rinceWind
on Dec 07, 2006 at 10:34

    This is an example of using CPAN to distribute a script. The guts are in Pod::Webserver, and a script, podwebserver, is supplied to call it.

    I came across this module in the first chapter of Perl Hacks as Hack #3: "Browse Perl Docs Online". My first thoughts were: "Big deal. There's http://search.cpan.org and ubiquitous internet access."

    But the difference is that CPAN gives you all modules known to person and beast kind, and the latest versions thereof. Podwebserver gives you your modules: just the ones you have installed, and the versions you have installed. It also doesn't care how they got there: make install, PPM install, apt-get install, rsync from an installation master, or whatever. It will also include your own custom-written and installed modules, ones that will never go anywhere near CPAN, provided they contain POD.

    Install the module Pod::Webserver, and run the script podwebserver in the background. You will then (after some initialisation) be able to point a browser at port 8020 on the node running podwebserver. You can use a different port number if you specify it from the command line.

    This will faithfully reflect @INC, including paths added via $ENV{PERL5LIB}. This is very useful for me in my present work location, as I have a custom hand-picked subset of CPAN (under version control, in a special place pointed to by PERL5LIB) that I make available to other developers.

    I've come across (and reported) a bug that there is an undocumented timeout after 5 hours of inactivity. If this is a problem, it's simple to take care of, by wrapping the script in a shell loop.

SVK
1 direct reply — Read more / Contribute
by rinceWind
on Aug 27, 2006 at 12:48

    I first heard about SVK at a talk given by Chia-Liang Kao at London.pm a couple of years ago. Being something of a version control freak (author of VCS::Lite::Repository), naturally I was interested. It was a good talk, and interesting to see what could be done with version control, with some interesting real world problems.

    I'd filed my thoughts under "something I should look at when I get a mo". Not being a jet-setter, or a long-distance commuter, I didn't have any immediate need for SVK.

    Now, with the Birmingham YAPC::EU imminent, I will be needing to venture out of my home hacking den, and would quite like to continue working on some CPAN module development. Coincidentally, I have moved my subversion repository for my CPAN modules to a hosted box, and gotten webdav over HTTP working.

    To date I have been working with multiple checkouts in different directories, getting to know svn quite well. Now I realise that this is a mess. Time to look at SVK.

    Despite what some have said, I didn't find it at all difficult to install - one command for Debian:

    apt-get install svk

    Granted, you could build it from CPAN. You need the svn Perl bindings for that to work, but apt deals with the dependency nicely. I gather that there is a Red Hat RPM available, as well as Windows binaries.

    Then there was the grokking of the tutorials, which had changed since I first looked. In the meantime, being fully up to speed with svn helped, as the svk commands are a superset of the svn commands.

    Once you have SVK installed, you need to issue the following command once, for your user account:

    svk depotmap --init

    This sets up your anonymous "depot", a container for repositories which can be standalone, or can mirror remote svn repositories (or indeed cvs and other version control repositories).

    svk mirror http://foo.bar.com/svn //foo.bar/svn
    svk sync //foo.bar/svn

    These are all you need to do to get started, hooking to an existing hosted repository. The first command creates an association between the remote repository, and its mirror that lives inside the depot. The second command is used to pull down the whole remote repository (though you can specify a subdirectory if this is all you are ever going to want), including the revision history.

    That's all you need to do online. Everything else now works with your local mirror on your hard drive. All the checking out, merging etc. can be done without an Internet connection. Obviously you need a link when you want to receive other updates, or to publish your changes. There's even a -p option that lets you work with remote repositories where you don't have commit rights (you submit patches).
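
    For the curious, here is a hedged sketch of what day-to-day work against the local mirror looks like (paths follow the //foo.bar/svn example above; svn users will recognise every command):

```shell
# Check out a working copy from the local mirror
svk checkout //foo.bar/svn/trunk myproject
cd myproject

# The usual cycle -- all of this reads from the local depot, not the network
svk status
svk diff

# Committing to a mirror path propagates to the remote repository,
# so this one needs a network connection
svk commit -m "fix typo"

# Later: refresh the mirror and the checkout in one go
svk pull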

    In my opinion, SVK rocks! This review is merely scratching the surface, and readers are referred to the full documentation:

    Update: local branches

    xdg is perfectly correct that you want to create a local branch in order to do your work offline. Your mirror repository is just that: a mirror; you can't commit extra work to it that is not in the remote repository. And svk will make sure that this is so, blocking commits to the mirror if you are not online.

    See xdg's reply below for the commands to create a new local branch and check out from it.

    When you are connected and want to release your work, use the following command:

    svk smerge --baseless --incremental --verbatim //local/foo.bar //foo.bar/

    The --incremental and --verbatim options do multiple commits, merging in the change history from your branch; omit these if you are happy with a single changeset and commit for everything. You need --baseless the first time you merge, as at this point, SVK has no common ancestor for the merge process.

Sman
No replies — Read more | Post response
by Khen1950fx
on Jul 10, 2006 at 00:09
    Sman is an enhanced version of 'apropos' and 'man -k', or a cross between perldoc and grep. I use it all the time because of its unique features: It supports complex natural language text searches; it will produce an extract of the man page with the searched text highlighted; it allows searches by manpage section, title, body, or filename; it indexes the entire manpage; lastly, it uses a prebuilt index for extremely fast searches.

    Sman has just one dependency: SWISH-E 2.4 or above. SWISH-E is a great search tool too.

    You can download Sman via CPAN. After you install it, you need to run sman-update, which builds the index Sman needs to work properly. At the command line, just type: /usr/local/bin/sman-update --verbose. That's it! I think you'll enjoy it as much as I do.
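
    Once the index is built, searching is a one-liner; for example (a hedged sketch; queries use SWISH-E syntax, so boolean operators work):

```shell
sman 'copy AND directory'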

    For SWISH-E: http://www.swish-e.org/Download

Module::Compile::TT
5 direct replies — Read more / Contribute
by dragonchild
on Jun 17, 2006 at 16:38
    How many of you have written code that looks something like:
    package Some::Class;

    use strict;
    use warnings;

    sub new {
        my $class = shift;
        return bless { @_ }, $class;
    }

    sub foo {
        my $self = shift;
        $self->{foo} = shift if @_;
        return $self->{foo};
    }

    sub bar {
        my $self = shift;
        $self->{bar} = shift if @_;
        return $self->{bar};
    }

    sub baz {
        my $self = shift;
        $self->{baz} = shift if @_;
        return $self->{baz};
    }

    sub do_something_useful { ... }
    Come on, raise your hands. I know I've done this at least a hundred times. Then, I learned about closures and went back and rewrote that code to look something like:
    package Some::Class;

    use strict;
    use warnings;

    sub new {
        my $class = shift;
        return bless { @_ }, $class;
    }

    foreach my $name ( qw( foo bar baz ) ) {
        no strict 'refs';
        *{ __PACKAGE__ . "::$name" } = sub {
            my $self = shift;
            $self->{$name} = shift if @_;
            return $self->{$name};
        };
    }

    sub do_something_useful { ... }
    Now, instead of 98% of the Perl community being able to maintain my code, I'm down to 0.98%. Several managers I've worked for have made me take out code like that, and for good reason. Just because they hired a Perl expert to write the code doesn't mean that they'll be able to hire someone like that to maintain it. So, it's back to repetition, right?

    <Trumpets sound in the distance /> Module::Compile::TT to the rescue! That code using typeglobs and closures now looks like:

    package Some::Class;

    use strict;
    use warnings;

    sub new {
        my $class = shift;
        return bless { @_ }, $class;
    }

    use tt;
    [% FOREACH name IN [ 'foo', 'bar', 'baz' ] %]
    sub [% name %] {
        my $self = shift;
        $self->{[% name %]} = shift if @_;
        return $self->{[% name %]};
    }
    [% END %]
    no tt;

    sub do_something_useful { ... }
    Whoa! That actually looks readable! Everyone knows how to read TT directives (or they're close enough to your favorite templating module's as to make no difference).

    But, isn't this a source filter? Well, technically, it is. But, there's a major difference between this and Filter::Simple. Module::Compile::TT compiles this once and installs a .pmc file that you can look at and edit. Or, you could just run TT against this module and see what would happen.

    Contrast that with Filter::Simple, which could generate potentially anything, leaving you no (sane) way of finding out what happened.

    The clincher for me is that I feel pretty sure I could take this to any manager I used to work for, and they would all be comfortable with that kind of code in their production codebases. This is code that can be maintained by the masses.

Carp::Clan
3 direct replies — Read more / Contribute
by Aristotle
on Jun 08, 2006 at 08:26

    From the docs:

    In case you just want to ward off all error messages from the module in which you “use Carp::Clan”, i.e., if you want to make all error messages or warnings to appear to originate from where your module was called (this is what you usually used to “use Carp;” for ;-)), instead of in your module itself (which is what you can do with a “die” or “warn” anyway), you do not need to provide a pattern, the module will automatically provide the correct one for you.

    Before I discovered this module, I would play silly games with local $Carp::CarpLevel = $Carp::CarpLevel + 1 sprinkled all over the place. Not only was that annoying, it also hatefully causes Carp to emit verbose messages. Now I just use Carp::Clan and things work as I meant them to.
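
    For illustration, here is a minimal hedged sketch of the no-pattern form described in the quoted docs (My::Clan::Module and frobnicate() are made-up names):

```perl
package My::Clan::Module;
use strict;
use warnings;

# With no pattern argument, Carp::Clan derives one from the current
# package, so croak()/carp() report errors from the caller's point of
# view -- no $Carp::CarpLevel juggling required.
use Carp::Clan;

sub frobnicate {
    my ($arg) = @_;
    defined $arg or croak "frobnicate() needs an argument";
    return scalar reverse $arg;
}

1;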

    Of course, that’s not the module’s only use – but that alone makes the module worthwhile to use everywhere, even if you don’t have a “clan” of modules.

    My only question is: why is this not part of the core? Why indeed doesn’t Carp itself work that way?

Pod::Usage
2 direct replies — Read more / Contribute
by skx
on Jan 08, 2006 at 08:40

    This module allows you to write scripts which contain their own documentation internally using Pod markup.

    The documentation can then be displayed to a user without having to write your own "print" statements, or duplication.

    Requirements

    None. (Ships as part of Perl since 5.6.)

    Who Should Use It

    Anybody who is writing complex command-line scripts which would benefit from included documentation, and who doesn't wish to describe the program's command-line arguments more than once.

    Bad Points

    None that I could tell.

    Example Notes

    Once I started using this module I found that it was incredibly easy to start writing documentation for functions and little tutorials inside my code.

    The fact that the '--manual' flag, (or whatever you like), can be made to display the Pod text from your script is very useful.
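
    As an illustration of the pattern, here is a hedged sketch of the usual Getopt::Long + Pod::Usage pairing (the script name and options are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;
use Pod::Usage;

GetOptions(
    'help'   => \my $help,
    'manual' => \my $manual,
) or pod2usage(2);                    # exit 2 on bad options, showing SYNOPSIS

pod2usage(1)             if $help;    # short usage message
pod2usage(-verbose => 2) if $manual;  # the full embedded manual

print "doing the real work...\n";

__END__

=head1 NAME

frob - example script with embedded documentation

=head1 SYNOPSIS

frob [--help] [--manual]

=cut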

Text::MicroMason
1 direct reply — Read more / Contribute
by Aristotle
on Dec 25, 2005 at 04:55

    I really, really wanted to like this module. The documentation is great, the interface is bliss, the Mason syntax is cool, and having multiple syntaxes is excellent.

    Unfortunately, my experience was so painful that I gave up after a day of using it. Maybe it would have been different if I had come to it with a blank slate, but that is not my situation: I am currently getting increasingly fed up with Template Toolkit’s crippled mini language (a rant for another day), so I was looking for a different system that would let me use Perl in my templates.

    I have a batch of templates that need to be ported – but that has proven absolutely impossible. The error reporting is so indescribably abysmal that the exercise turned out worse than pulling teeth. Some errors segfaulted perl! I have no idea what the precise problem was; I gave up well before I could triangulate its exact location in the template. And triangulate I had to: simple template syntax errors result in nothing but an error on the line in your own code which invokes the compiled template; grave errors produce an enigmatic error from deep in the bowels of the module. If you’re lucky, there’s a dump of generated, hard-to-read Perl code from which you can try to decipher the problem. If you’re particularly unlucky, you get a segfault.

    Now, I’m possibly an outlier. It’s quite conceivable that these problems are far less of an issue when you’re starting from scratch with an empty file, instead of trying to port an existing template from one syntax to another by applying incremental search-and-replace patches and then running the result to see what breaks. But this experience still makes me wary for maintenance programmers who come in months after the fact to tweak a template.

Date::Simple
1 direct reply — Read more / Contribute
by shiza
on Jul 14, 2005 at 18:54

    Date::Simple - This module has become my first choice when I need to work with dates. It is what it is named: a simple, intuitive interface to date manipulation.

    Here's a list of its handy features:
    • Date validation
    • Interval arithmetic
    • Day-of-week calculation
    • Transparent date formatting
    • OO interface with a lot of very useful methods
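
    Those features look roughly like this in practice (a hedged sketch; the date is arbitrary):

```perl
use strict;
use warnings;
use Date::Simple ('date');

# date() returns undef for an invalid date -- validation for free
my $d = date('2005-07-14') or die "invalid date";

my $next_week = $d + 7;              # interval arithmetic, in days
my $days      = $next_week - $d;     # back to an integer: 7
my $dow       = $d->day_of_week;     # 0 = Sunday .. 6 = Saturday

print $d->format('%d %b %Y'), "\n";  # strftime-style formatting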
Class::DBI
3 direct replies — Read more / Contribute
by stonecolddevin
on Apr 26, 2005 at 11:56
    Class::DBI is quite a simple module, really: mainly you inherit its methods and go about your business.
    SQL no longer gets in your way or causes minor issues with your code, so you can retrieve, create, update or do any other database modifications necessary with the greatest of ease.
    The only requirements are to set up your primary index columns, and the columns you need for your selects, in a module that inherits from Class::DBI; then you have total access to Class::DBI's magic.
    (example of setup)
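
    The setup alluded to above typically looks something like this (a hedged sketch using the Music::CD class from the examples below; the DSN is a placeholder):

```perl
package Music::CD;
use strict;
use warnings;
use base 'Class::DBI';

Music::CD->connection('dbi:SQLite:dbname=music.db', '', '');
Music::CD->table('cd');
Music::CD->columns(Primary   => 'cdid');
Music::CD->columns(Essential => qw/artist title year/);

1;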

    Other useful methods:
    add_constructor
    Allows you to construct an SQL query snippet and call it through your object.
    Music::CD->add_constructor(new_music => 'year > ?');
    my @recent = Music::CD->new_music(2000);

    retrieve_from_sql
    Allows you to construct your own SQL query like so:
    (Note: you are inlining the entire WHERE clause.)
    my @cds = Music::CD->retrieve_from_sql(qq{
        artist = 'Ozzy Osbourne' AND
        title like "%Crazy" AND
        year <= 1986
        ORDER BY year
        LIMIT 2,3
    });

    Class::DBI::AbstractSearch A search class provided by Class::DBI that allows you to write arbitrarily complex searches using perl data structures, rather than SQL.
    my @music = Music::CD->search_where(
        artist => [ 'Ozzy', 'Kelly' ],
        status => { '!=', 'outdated' },
    );


    These are just a few of the features of Class::DBI that I have found quite useful. Check the module out for yourself; it cut my development time at least in half.
Pod::Simple::HTMLBatch
No replies — Read more | Post response
by doom
on Dec 01, 2004 at 00:22
    I went once around the block with a number of the existing perl tools for pod to html processing, and I wrote up what I found: Pod to Html.

    My goal was to find a simple way to extract the pod from a tree of multiple *.pm/*.pod files and convert it into a tree of html files. And if you're nodding your head saying "Of course, that's so simple!", you probably haven't looked into this very closely.

    Summary: Pod::Html and pod2html are known to be very weak, limited tools (and ditto pods2html, which is closer to what I needed). There are many alternative methods, none of which have become standards as of yet. My suggestion is to keep an eye on Sean M. Burke: his Pod::Simple::HTMLBatch turned out to be almost exactly what I was looking for: it crunches the documentation for a tree of modules, and generates readable html, converting L<> links into relative html links.
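
    In its simplest form (following the module's own synopsis), batch conversion is a one-liner:

```shell
perl -MPod::Simple::HTMLBatch -e 'Pod::Simple::HTMLBatch::go' in out
```

    where `in` is a directory tree to scan for pod and `out` receives the generated HTML (both names are placeholders).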

    An honorable mention also goes to Mark Overmeer, who looks like he's developing some interesting ideas with his OODoc (some custom extensions to pod markup to deal with large OO projects).

    By the way: it's a funny thing, but pod processing modules almost always seem to have very sketchy pod...

WWW::Amazon::Wishlist
No replies — Read more | Post response
by monkfan
on Nov 07, 2004 at 20:29
    This module is a good alternative to Net::Amazon for this specific task. Unlike Net::Amazon, which is limited to 50 wishlist entries, WWW::Amazon::Wishlist exhausts the full content. This can be useful for web applications that use Amazon's rich database, to play around with an Amazon hack ;-)

    Caveat: as mentioned by the author this module is sensitive to HTML template changes.
    Below is the sample code to extract wishlist:
    #!/usr/bin/perl -w
    use strict;
    use WWW::Amazon::Wishlist qw(get_list COM);    # version 1.4

    # Your id can be found in your wishlist's (compact version) URL
    my $my_amazon_com_id = '26Y2G652628T3';

    my @wishlist = get_list($my_amazon_com_id, COM);
    for ( 0 .. $#wishlist ) {
        print "[$_] Title :", $wishlist[$_]->{title},  "\n",
              "    Author:",  $wishlist[$_]->{author}, "\n",
              "    ASIN  :",  $wishlist[$_]->{asin},   "\n",
              "    Price :",  $wishlist[$_]->{price},  "\n",
              $wishlist[$_]->{type}, "\n";
    }
Data::TreeDumper
No replies — Read more | Post response
by Anonymous Monk
on Jul 07, 2004 at 19:09
    I'll make history this time and start something new in the review section. Instead of reviewing Data::TreeDumper (which I wrote), I'll ask others to review it. Shoot, grind and flame at your heart's content.

    Data::TreeDumper can be found here: http://search.cpan.org/~nkh/. Installing the Bundle is the easiest way. This module works on Linux and should work on Windows, though it has not been tested on that platform. I think this module is as useful for beginners as it is for seasoned Perl geeks.

    DESCRIPTION

    Data::Dumper and other modules do a great job of dumping data structures, but their output sometimes takes more brain to understand than the data itself. When dumping large amounts of data, the output is overwhelming and it's difficult to see the relationship between the pieces of the dumped data.

    Data::TreeDumper dumps data in a tree-like fashion, hoping the output will be easier on the beholder's eye and brain. But it might as well be the opposite!
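
    To judge for yourself, here is a minimal hedged example of the main entry point:

```perl
use strict;
use warnings;
use Data::TreeDumper;

my $data = {
    name  => 'example',
    list  => [ 1, 2, [ 3, 4 ] ],
    inner => { a => 'b' },
};

# DumpTree() takes the structure and a title and returns the
# rendered ASCII tree as a string
print DumpTree($data, 'my data');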

    ....
