PerlMonks
What to test in a new module

by Bod (Parson)
on Jan 28, 2023 at 22:16 UTC ( [id://11149996] )

Bod has asked for the wisdom of the Perl Monks concerning the following question:

I've created a helper function for my own purposes and thought it would be useful to others. So CPAN seems a sensible place to put it so others can use it if they want to...

Its function is simple - to go to the homepage of a website and return an array of URIs within that site, being careful not to stray outside it, that use the http or https scheme. It ignores things that aren't plain text or that it cannot parse, such as PDFs or CSS files, but it includes Javascript files, as links (things like window.open or document.location.href) might be lurking there. It deliberately doesn't try to follow the action attribute of a form as that is probably meaningless without the form data.

As the Monastery has taught me that all published modules should have tests, I want to do it properly and provide those tests...

But, given that there is only one function and it makes HTTP requests, what should I test?

The obvious (to me) test is that it returns the right number of URIs from a website. But that number will likely change over time, so I cannot hardcode the 'right' answer into the tests. So beyond the necessary dependencies and their versions, I'd like some ideas of what should be in the tests, please.

In case you're interested, this came about from wanting to automate producing and installing sitemap files.

Replies are listed 'Best First'.
Re: What to test in a new module
by SankoR (Prior) on Jan 28, 2023 at 22:44 UTC
    Include sample pages with the dist that your code should handle correctly. Include URIs that aren't supposed to be gathered by your code, tricky URIs, etc. When you fix bugs later, add tests that make sure you have no regressions.

    Refactor your code so you can call and test the 'logic' without grabbing a remote page.
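
    SankoR's two suggestions can be sketched together. The sub name extract_http_links is invented for illustration, and the bare regex is only a stand-in for a real HTML parser such as HTML::Parser or Mojo::DOM:

```perl
use strict;
use warnings;

# Hypothetical parsing sub, factored out of the fetching code so that
# tests can feed it a literal string instead of grabbing a remote page.
# A real implementation would use HTML::Parser or Mojo::DOM; the regex
# here is only for illustration.
sub extract_http_links {
    my ($html) = @_;
    return $html =~ /href\s*=\s*"(https?:[^"]+)"/gi;
}

# A "sample page" that could ship with the dist, including a URI that
# must NOT be gathered (wrong scheme).
my $sample = <<'HTML';
<a href="https://example.com/a">A</a>
<a href="mailto:bod@example.com">mail</a>
<a href="http://example.com/b">B</a>
HTML

my @links = extract_http_links($sample);
# @links is ("https://example.com/a", "http://example.com/b")
```

    When a bug is fixed later, the broken markup goes into another sample page and a regression test pins the corrected behaviour.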
Re: What to test in a new module
by kcott (Archbishop) on Jan 29, 2023 at 00:50 UTC

    G'day Bod,

    Obviously, without seeing the module, I can only give generalised information. The following describes how I do testing. It's fairly standard but different authors have their own way of doing things. I also have my own naming conventions: not dissimilar from what many others use (but certainly not universal). I'd suggest looking around CPAN and seeing what others have done.

    Firstly, my (skeleton) module directory layout tends to follow this pattern:

    Some-Module/
        Changes
        Makefile.PL
        MANIFEST
        MANIFEST.SKIP
        lib/
            Some/
                Module.pm
        README
        t/
            *.t  (test files here)

    Test files are typically split into two groups: those generally run for any installation; and, Author Only tests which are normally skipped for a general make test.

    General Tests

    The first is always "00-load.t". It's short, simple, and just tests that "use Some::Module;" works. It uses Test::More::use_ok() and typically looks something like this:

    #!perl -T
    use strict;
    use warnings;
    use Test::More tests => 1;

    BEGIN { use_ok('Some::Module') }

    diag "Testing Some::Module $Some::Module::VERSION";

    If it's an object-oriented module, the next test is "01-instantiate.t". The complexity of this script will depend on whether there are any instantiation arguments and, if so, whether they are required or optional. Here's a simple example, paraphrased from a real test script:

    #!perl -T
    use strict;
    use warnings;
    use Test::More tests => 3;
    use Some::Module;

    my $sm;
    my $eval_ok = 0;

    eval {
        $sm = Some::Module::->new();
        1;
    } && do {
        $eval_ok = 1;
    };

    is($eval_ok, 1, 'Test eval OK');
    is(defined $sm, 1, 'Test Some::Module object defined');
    isa_ok($sm, 'Some::Module');

    Individual methods and functions are tested next. Wherever possible, I put tests for each method or function in their own separate scripts. These follow the same naming conventions; for example, "02-some_method.t", "03-some_function.t", and so on. Here you need to test all possible argument combinations and return values. Consider as many problematic use cases as possible and test that they are all handled correctly; add more tests as other problems are encountered (either from your own work or bug reports from others).

    I tend to put all tests in their own anonymous block:

    #!perl -T
    use strict;
    use warnings;
    use Test::More tests => N;
    use Some::Module;

    { # Isolate tests with one set of arguments
        my $sm = Some::Module::->new(...);
        my @args = (...);
        is($sm->meth(@args), ...
    }
    { # Isolate tests with a different set of arguments
        my $sm = Some::Module::->new(...);
        my @args = (...);
        is($sm->meth(@args), ...
    }

    There's a plethora of modules in the "Mock::" namespace. Although I haven't used it myself, Test::Mock::LWP looks like it might be useful for you. I didn't spend any time searching for you; this one just happened to stand out; do have a look around for others.
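
    Whichever Mock:: module is chosen, the underlying idea can be shown with plain Perl: override the one sub that touches the network. Everything below (Some::Crawler, _fetch, links) is an invented stand-in, not Bod's actual module:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Invented stand-in for the real module: the only sub that does I/O
# is _fetch, so it is the only thing the test needs to replace.
package Some::Crawler;

sub new    { return bless {}, shift }
sub _fetch { die "would hit the network" }

sub links {
    my ($self, $url) = @_;
    my $html = $self->_fetch($url);
    return [ $html =~ /href="(https?:[^"]+)"/g ];
}

package main;

# The "mock": swap in a canned response for the test run.
{
    no warnings 'redefine';
    *Some::Crawler::_fetch = sub { return '<a href="https://example.com/p">p</a>' };
}

is_deeply(
    Some::Crawler->new->links('https://example.com/'),
    ['https://example.com/p'],
    'links parsed from mocked page',
);
```

    Test::Mock::LWP wraps the same trick up at the LWP::UserAgent level rather than inside your own module.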

    Author Only Tests

    These are really only for you. They generally represent sanity checks of the module code, and ancillary files, in your distribution. They are typically triggered by an environment variable having a TRUE value; and are skipped otherwise.
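
    The gating itself is usually a line or two at the top of the script. AUTHOR_TESTING is one common convention, though the exact variable name varies between authors:

```perl
use strict;
use warnings;
use Test::More;

# One common convention: gate the whole script on an environment
# variable so ordinary installs skip it (the variable name varies).
sub author_testing { return $ENV{AUTHOR_TESTING} ? 1 : 0 }

plan skip_all => 'Author test: set AUTHOR_TESTING=1 to run'
    unless author_testing();

ok(1, 'author-only sanity checks would go here');
done_testing();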

    In the spoiler below, I show three scripts that I've pulled verbatim from a random, personal distribution; these are standard for me (with, potentially, some variation in version numbers). I actually use these with all of my $work modules as well (although, they do have a few more as standard).

    — Ken

Re: What to test in a new module (TDD)
by eyepopslikeamosquito (Archbishop) on Jan 29, 2023 at 01:48 UTC

    I've created a helper function for my own purposes and thought it would be useful to others ... As the Monastery has taught me that all published modules should have tests, I want to do it properly and provide those tests ... what should I test?

    Bod, you are asking this question too late! The Monastery has also taught you to write the tests first because the act of writing your tests changes and improves your module's design:

    • Writing a test first forces you to focus on interface - from the point of view of the user.
    • Hard to test code is often hard to use. Simpler interfaces are easier to test.
    • Functions that are encapsulated and easy to test are easy to reuse.
    • Components that are easy to mock are usually more flexible/extensible.
    • Testing components in isolation ensures they can be understood in isolation and promotes low coupling/high cohesion.
    • Implementing only what is required to pass your tests helps prevent over-engineering.

    -- from "Test Driven Development" section at Effective Automated Testing

      The Monastery has also taught you to write the tests first because the act of writing your tests changes and improves your module's design

      You are quite correct - as usual...

      However, I have extraordinary cognitive problems with doing this. Trying to work out what a module is going to do and how it will do it before writing a line of code is quite a leap of conceptualism for me. I do not doubt that I could learn this cognitive skill if coding and module design were my job, but they are very much a sideline. At 55, I notice my brain's plasticity is fading a little, which doesn't help.

      Over in this node it was suggested that I might like to create a module for Well Known Binary (WKB) from the work I had already done to read one file. I started writing the tests for that module but it has ground to a halt because of the issue above.

      Back to this "module"...
      It didn't start out as a module. It started as a bit of throw away code to build an array. It then turned into a sub in a small script for my own very limited use. Then, and only then, did I think it might be helpful to other people as it is a relatively general building block.

      I don't think tests are necessary for bits of throw away code. Nor for simple scripts that are only intended to be used by me.
      Do you think otherwise?

        Tests for "throw away" code, no. Tests for scripts only for me, a qualified no - if it is important that the script is "correct", or it is subject to revision over time (hmm, isn't that anything that's not "throw away"?), then tests can be very useful to avoid regressions. Tests for public-facing code, solid yes.

        For code that evolves from throw away, to personal use, to "let's make this a module", it seems sensible that tests should evolve from none, to maybe some, to something that looks like TDD. Aside from anything else, casting the code in a TDD framework forces you to think about the scope of the code and how other people might use it. Thinking about usage and scope shapes the API. TDD then helps codify the API and test its utility and suitability.

        Agile programming advocates often suggest that the code is the documentation, but with TDD the tests are the documentation. In a sense TDD is about writing the documentation, or at least the problem description, before you write the code, and that seems like an altogether good thing to do. Thinking about what code should do before you write it can't be a bad thing, surely?

        Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
        Hello Bod,

        > However, I have extraordinary cognitive problems with doing this...

        you can try to follow my step-by-step-tutorial-on-perl-module-creation-with-tests-and-git to see if you get some inspiration.

        L*

        There are no rules, there are no thumbs..
        Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.

        I don't think tests are necessary for bits of throw away code. Nor for simple scripts that are only intended to be used by me. Do you think otherwise?

        No. Bod, I think you're doing a great job. I trust you appreciate from my numerous Coventry working-class asides, I just enjoy teasing you. :)

        Of course, working as a professional programmer for large companies is a completely different ball game. If you ship buggy code that upsets a big important customer, you might even be subjected to a probing Five Whys post mortem. For still more pressure, as indicated at On Interfaces and APIs, try shipping a brand new public API to thousands of customers, with only one chance to get it right.

        I might add that when I'm doing recreational programming (as I've been doing quite a bit lately) I tend to just hack out the code without using TDD. In the tortuously long Long List is Long series, for example, I haven't written a single test, just tested the output of each new version manually via the Unix diff command. Update: finally wrote my first LLiL unit test on Mar 01 2023.

        Of course, I could never get away with that at work, where you are not permitted to check in new code without passing peer code review (where you will be grilled on how you tested your code) and where you will typically check in accompanying unit and system test changes in step with each code change.

        For my personal opinion on how to do software development in large companies see: Why Create Coding Standards and Perform Code Reviews?

      > write the tests first

      How are you supposed to write the tests before you write the code? What is being tested if there is no code? I searched for examples of this technique but could only find buzz word salad.

        The term TDD is unfortunate because API design is fundamentally an iterative process, testability being just one (crucial) aspect. For public APIs you simply don't have the luxury of changing the interface after release, so you need to get it right, you need to prove the module's testability by writing and running real tests before release.

        More detail on this difficult topic can be found in the "API Design Checklist" section at On Interfaces and APIs. One bullet point from that list clarifies the iterative nature of TDD:

        • "Play test" your API from different perspectives: newbie user, expert user, maintenance programmer, support analyst, tester. In the early stages, imagine the perfect interface without worrying about implementation constraints. Design iteratively.

        My Google search for "test driven design" got me Test-driven_development as a first hit. That is a short article that hits the high points and directly answers your objection - the tests fail until the code they test is written and is correct (at least in the eyes of the tests).

        TDD is a technique I use occasionally, but in each case I've used it, the result has been a spectacular success. When I have used TDD, I've also used code coverage to ensure a sensibly high proportion of the code is tested. In my experience the result was seemingly slow progress, but substantially bug-free and easy-to-maintain (i.e. high quality) code as a result.

        Not all projects can use TDD. My day job is writing hardware specific embedded code for in house developed systems. Testing software embedded in hardware is challenging!

        Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond

        This seemed like a perfectly reasonable question. I gave it an upvote which resulted in: "Reputation: 0". So, someone had downvoted your post. Why? Because you had the temerity to question dogma? I wasn't impressed with this but there's little I can do about it.

        As with many things, there's a spectrum with many shades of grey between black and white at the extremities. It is rare for either "black" or "white" to be optimal; a compromise somewhere in the "grey" is usually the best option. This applies equally to software development: writing all of the code first, then bolting on tests afterwards, is a bad move; similarly, writing all tests first, which will obviously fail until the code is written afterwards, is also a bad move; what is needed is a compromise.

        What follows is how I achieve this compromise. I'm not suggesting this is in any way perfect; it is, however, something to consider in terms of the principles involved. Probably the main point is that the "black" and "white" extremes are avoided.

        I start most modules with module-starter and use Module::Starter::PBP as a plugin. I like the templating facilities provided by Module::Starter::PBP but not the templates themselves (so I've edited those quite substantially). I have many versions of the configuration which vary depending on: personal code, $work code, Perl version, type of module, and so on — the following refers to personal code for v5.36.

        This gives me a directory structure along the lines described above. The module code looks like this:

        package Some::Module;

        use v5.36;

        our $VERSION = '0.001';

        1;

        __END__

        =encoding utf8

        ... POD templates and boilerplate ...

        The t/ directory will contain equivalents of the three 99-*.t Author Only scripts shown above, and a template for 00-load.t which looks like:

        #!perl
        use v5.36;
        use Test::More tests => 1;

        BEGIN { use_ok('__MODULE__') }

        diag "Testing __MODULE__ $__MODULE__::VERSION";

        Applying a global s/__MODULE__/Some::Module/ to that file gives me a working distribution. I can now run the well-known incantation:

        perl Makefile.PL
        make
        make test

        I have created application and test code in unison: the compromise.

        From here, the specifics will vary with every module; however, the main principle is to add small amounts of functionality and concomitant tests incrementally. Continue doing this until all functionality is coded and has tests.

        In closing, I'll just note that the OP's title had "What to test"; I've added "[When to test]" to my title indicating this subthread is straying from the original. We actually don't know if Bod had already written all of his tests except the one he asked about, or if he was adding tests as an afterthought. Assuming the latter, and rebuking him for it, was a mistake in my opinion.

        — Ken

        I'm going to get eaten alive for this, but TDD is something I think people adhere to in a dogmatic fashion without a lot of thought put into API ergonomics and organic development.

        I am 100% in agreement that your code needs to be tested to the point before you reach diminishing returns. I do not feel like cementing yourself in place by writing your tests first is the way to accomplish this.

        You write your tests first and now a) you are going to try to fit your implementation into that mold, and b) you have two things to refactor until you reach stable parity between your design and implementation.

        Unless you're designing and writing code against a predefined spec/RFC, I really don't feel like strict adherence to TDD is beneficial. Code needs to develop organically and be allowed to form its own flow instead of being hammered into a predefined hole of a certain shape.

        Three thousand years of beautiful tradition, from Moses to Sandy Koufax, you're god damn right I'm living in the fucking past

Re: What to test in a new module
by bliako (Abbot) on Mar 12, 2023 at 16:42 UTC

    In order to avoid cementing an API before implementation, but also stick with the "everything-starts-with-a-test" approach (whose benefits I appreciate), I like to break down my code into smaller functions, each with the simplest possible API.

    For example, fetch_url($urlstr), html2dom($htmlstr), extract_urls_from_dom($dom), is_url_pointing_to_pdf($urlstr). And I leave the user-calling function last. Until I reach the time to implement it, I am already testing these simple functions, and the final user-calling function's API is crystallising in my head.

    So, I start with a test! But for the bricks so-to-speak of the app. And in doing so, I slowly slowly settle on where to place the loo.

    p.s. SankoR's "Refactor your code so you can call and test the 'logic' without grabbing a remote page" is good: use locally-fetched HTML to test your code rather than hitting the sites, with the risk of your tests failing each time they change. On this, I put network-access tests in author tests, or live tests which are only executed by me and not by the potential user.

    bw, bliako

Re: What to test in a new module
by stevieb (Canon) on Mar 12, 2023 at 21:04 UTC
    "what should I test?"

    The parsing mechanics. There's no need to test the underlying net access stuff; it tests itself. Also, please don't enable internet-bound tests by default in the unit test suite. These should be developer-only tests, with an env var for a user to enable them if they wish.

    Set up a data directory within your test suite with a bunch of various HTML files with various URLs, and test the parsing functions.

    If you need to test error codes and return values, you can mock out a request/response.
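
    A sketch of that layout, made self-contained with a temp directory standing in for t/data/ (extract_http_links is an invented stand-in for the module's parser):

```perl
use strict;
use warnings;
use Test::More tests => 1;
use File::Temp qw(tempdir);

# Invented stand-in for the module's parsing function.
sub extract_http_links {
    my ($html) = @_;
    return [ $html =~ /href="(https?:[^"]+)"/g ];
}

# In a real dist these files would live in t/data/ and be listed in
# MANIFEST; a temp dir keeps this example self-contained.
my $dir = tempdir(CLEANUP => 1);
open my $out, '>', "$dir/page.html" or die "open: $!";
print {$out} '<a href="https://example.com/x">x</a>';
close $out or die "close: $!";

open my $in, '<', "$dir/page.html" or die "open: $!";
my $html = do { local $/; <$in> };
close $in;

is_deeply(extract_http_links($html), ['https://example.com/x'],
    'parser handles a stored sample page');
```

    Each awkward page found in the wild becomes another file in the data directory and another test case.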

Re: What to test in a new module
by stevieb (Canon) on Jul 07, 2023 at 02:06 UTC

    Your distribution doesn't appear to be in a VCS. I wanted to create a pull/merge request, but to no avail, and I really don't feel like creating and sending patches so I'll just give some feedback on Business::Stripe::Webhook.

    • There's no need to eval $VERSION: $VERSION = eval $VERSION;
    • It is best common practice to absorb your parameters in a list. Instead of my $class = shift;, you should be doing my ($class) = @_;
    • Reduce a ton of noise by eliminating quotes in hash keys when they don't contain anything other than alphanum and underscore. This: $vars{'error'} becomes this: $vars{error}
    • Why are you doing an eval on this line... you do nothing at all if eval fails: $vars{'webhook'} = eval { decode_json($vars{'payload'});};
    • Most of your comments are superfluous... the ones before each sub are unneeded. Your sub name should exactly state what it's doing; in your case, they do... remove the comments. Use comments to explain WHY something is happening, not what is happening.
    • No need for this: $self->{'error'} = '';. It'll be autovivified if needed.
    • Avoid returning throughout an entire subroutine. It makes it very hard for a reader to see what's happening (there are other reasons, but this is a big one). Instead of a bunch of return undef; inside of many if/else statements, create a flag at the top of the sub, and have each if/else set it. Return at the end of the sub.
    • This is pedantic, but you only need the dereference operator after the first instance of a reference in a chain. This $self->{'webhook'}->{'type'}; can be written as $self->{webhook}{type};. Again, reduces noise.
    • You can interpolate hash values. Instead of my $signed_payload = $sig_head{'t'} . '.' . $self->{'payload'};, you can simply do: my $signed_payload = "$sig_head{t}.$self->{payload}";
    • Touching on an above topic, this: my $self = shift; my %keys = @_; is better written as my ($self, %keys) = @_;
    • There's no need for a return; at the end of a sub if nothing's being returned (unless you are intentionally returning undef).

    There are some other oddities (I don't have the time to figure out why you're using STDOUT and STDERR instead of just using available built-ins, nor can I sort out at a glance why you're using the & for certain calls). However, hopefully you find some use in my quick code review here.

      Instead of my $class = shift;, you should be doing my ($class) = @_;
      my $self = shift; my %keys = @_; is better written as my ($self, %keys) = @_;

      I disagree. The object (or class name, in the case of static calls) is qualitatively different from the rest of arguments. I also think it's only fair to point out that, except in the one case of %keys = @_ you cited, he very consistently used the shift style only when there were no further arguments, and otherwise used the style you advocate, e.g. my ($self, $subscription, $secret) = @_;

      Reduce a ton of noise by eliminating quotes in hash keys when they don't contain anything other than alphanum and underscore. This: $vars{'error'} becomes this: $vars{error}

      I disagree. Quotes aren't noise. Worse, in my opinion, is the inconsistency in having some keys with quotes and others without, which arises when, eventually, you have some keys which must be quoted.

      Why are you doing an eval on this line... you do nothing at all if eval fails: $vars{'webhook'} = eval { decode_json($vars{'payload'});};

      What's the mystery? Did you miss the following line? Clearly, decode_json could throw, but he doesn't care about that, he just wants an undef in that case. And he checks for it.

      Instead of a bunch of return undef; inside of many if/else statements, create a flag at the top of the sub, and have each if/else set it. Return at the end of the sub.

      I disagree. He is usually doing this when an error has occurred and there's no point in proceeding. Essentially, this is him using a magic / out-of-band return value (undef) instead of throwing an exception. It is an entirely valid way of doing things. (Of course, there is much debate out there on whether it's better than exceptions.) More importantly, it makes the code much, much simpler than a multiway/nested if-else construction.

      $self->{'webhook'}->{'type'}; can be written as $self->{webhook}{type};. Again, reduces noise.

      It's a matter of style only; and some people may find it nicer. Again: not noise. You seem to think that any characters which could possibly be eliminated are necessarily "noise".

      You can interpolate hash values

      You can... but concatenating is faster.

      nor can I sort out at a glance why you're using the & for certain calls

      Indeed,

      &{$self->{$hook_type}}($self->{'webhook'});
      is better written as
      $self->{$hook_type}->($self->{'webhook'});
      ... and IMHO this is one reason why I might be tempted to use deref arrows at other points in the reference chain besides the first: for consistency of style. (I don't, though. Not usually. When I do, it's for other reasons. :-)

        You can... but concatenating is faster

        I hadn't thought of the speed implications. But I guess concatenation is always going to be faster than interpolation.
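
        The claim is easy to check on your own perl with the core Benchmark module; in practice the two forms compile to very similar ops and any difference is usually in the noise:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Illustrative data only; both subs build the same string, and
# cmpthese reports their relative rates.
my %sig_head = (t => '1675000000');
my $payload  = '{"id":"evt_123"}';

cmpthese(200_000, {
    concat      => sub { my $s = $sig_head{t} . '.' . $payload },
    interpolate => sub { my $s = "$sig_head{t}.$payload" },
});
```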

      "Your distribution doesn't appear to be in a VCS"

      It seems to be on GitHub, Bod - see META_MERGE.

        It has been uploaded to GitHub...but I haven't done much with it there. Any development is still being done with local copies whilst I slowly learn how to properly use GitHub...

        The META files are something I need to get an understanding of. It's on my list of things to investigate and no doubt there will be a question to The Monastery at some point...

      It's a case of TIMTOWTDI, and I have not looked at the original code, but two comments:

      No need for this: $self->{'error'} = '';. It'll be autovivified if needed.

      It will autovivify as undef if it is not assigned to before it is accessed. That could cause issues later on with other code.

      There's no need for a return; at the end of a sub if nothing's being returned (unless you intentionally are returning undef.

      It's safer to explicitly return as otherwise the value of the last expression is returned. FWIW, this is covered in PBP on p197 under Implicit Returns, with a follow-on under Returning Failure on p199 (although that also encourages the Contextual::Return module which I don't use).
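
      The pitfall in a nutshell (the sub names are invented for illustration):

```perl
use strict;
use warnings;

# Without an explicit return, the caller receives the value of the
# last evaluated expression - here, the assigned value.
sub set_flag_implicit {
    my ($self, $value) = @_;
    $self->{flag} = $value;           # leaks $value to the caller
}

sub set_flag_explicit {
    my ($self, $value) = @_;
    $self->{flag} = $value;
    return;                           # undef in scalar context
}

my $leaked = set_flag_implicit({}, 'secret');   # 'secret'
my $hidden = set_flag_explicit({}, 'secret');   # undef
```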

        It's safer to explicitly return as otherwise the value of the last expression is returned.

        He did say "unless you intentionally are returning undef". I agree with stevieb on that point; however, in my code if I am intentionally returning undef I make that explicit with return(); I never use return; in my code unless I am returning early from a sub which is not expected to return a value. (If the caller insists on doing something with the returned value from such a sub, TNMP.)

      Lots of good advice in there. I'm not averse to a bunch of returns inside a sub especially if it simplifies the code and am also quite happy with my $x = shift; so long as there is only one arg being retrieved. As time goes on I am becoming less tolerant of unnecessary punctuation as I have to pause and question why it is there so that gets a big thumbs-up from me.

      There's no need to eval $VERSION: $VERSION = eval $VERSION;

      This was covered a few weeks ago where the original source for it was unearthed. Still not convinced of the need for massaging the version like this but if I were I would go with Haarg's approach instead - for clarity if nothing else.


      🦛

      Brief reply as I'm about to go to bed but I'll reply to this one now and look properly at the rest over the next few days...

      Why are you doing an eval on this line... you do nothing at all if eval fails: $vars{'webhook'} = eval { decode_json($vars{'payload'});};

      It is quite possible that $vars{'payload'} could contain badly formed JSON. If it does, decode_json calls die (or something equally terminal). I want to catch that so that my module doesn't exit. Then the user of the module can handle the error as they need. If the module were to die, the return to Stripe would be invalid or, more likely, non-existent which may cause issues with future webhook calls.
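
      The shape of that pattern, sketched with the core JSON::PP (the actual dist may use JSON or Cpanel::JSON::XS; decode_json behaves the same for this purpose):

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# Malformed JSON makes decode_json die; the eval turns that into
# undef so the caller can handle the error rather than the module
# exiting mid-webhook.
my %vars = (payload => '{ "type": "invoice.paid" ');   # truncated JSON

$vars{webhook} = eval { decode_json($vars{payload}) };
if (!defined $vars{webhook}) {
    $vars{error} = "Payload is not valid JSON: $@";
}
# $vars{webhook} is undef; $vars{error} holds the parse error
```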

      You can interpolate hash values. Instead of my $signed_payload = $sig_head{'t'} . '.' . $self->{'payload'};, you can simply do: my $signed_payload = "$sig_head{t}.$self->{payload}";

      I'm aware that I can but that doesn't mean I should.
      To my mind, referenced variables are clearer outside interpolation.

      There's no need for a return; at the end of a sub if nothing's being returned (unless you intentionally are returning undef).

      I am intentionally returning undef. The value is returned earlier in the method.
      It is possible that a rogue return may have been left in. However, I've been bitten in the past with a sub returning an unexpected value because there was no explicit return statement. So I now nearly always include one...

      However, hopefully you find some use in my quick code review here.

      Yes, very useful. Thank you for taking the time to review the code.

      Reduce a ton of noise by eliminating quotes in hash keys when they don't contain anything other than alphanum and underscore. This: $vars{'error'} becomes this: $vars{error}

      The reasons I single quote hash keys are:

      1. it makes them obvious as literals because a text editor highlights them.
      2. it differentiates them from interpolated hash keys - e.g. $vars{"key_$count"};
      3. it is easier to expand into variable keys

      I think of 3 in the same way I have been told that:

      if ($condition) { $var = 1; }
      is better than:
      $var = 1 if $condition;
      because we might want to add another statement into the code block.

      Likewise, I feel:

      $vars{'example'} = 1;
      is better than:
      $vars{example} = 1;
      because we can expand it easily if necessary and create something like:
      $vars{'example' . $count} = 1;

      Others may have different opinions (clearly they do, as there is a lot of code with unquoted hash keys), but this works for me in terms of clarity, expandability and maintainability. I don't see any downside to it on performance (please correct me if there is!) so I think I shall keep this one the way it is.

        Better is in the eye of the beholder and depends on too many preferences to define properly.

        e.g. run () if $chased is exactly the same code as $chased and run ();. Which of the two to prefer fully depends on your brain, the size of the team (1 is a valid size), the style/rules the team has agreed on, the code consistency, and the context. Personally I hate statement modifiers, as they do not fit my brain, so I prefer "expression and action" over "action if expression", but it is neither better nor worse.

        With your reasoning, *I* always avoid single quotes wherever possible, so that /when/ I see them, they are special. I made an exception in a CPAN module that I want to be as portable as possible, even under the most strict release versions of perl, even some experimental ones.

        I personally find $vars{"example $count"} way more readable and intuitive than $vars{'example ' . $count}

        Your mileage may/will vary. Be consistent, which is way more important than being conventional.


        Enjoy, Have FUN! H.Merijn

        In my experience, there are two ways of using hashes. One is as a lazy-programmer's object. You see these kinds of hashes returned from database queries, received from REST API requests, etc. Something like this:

        my $uncle = {
            name => 'Bob',
            age  => 45,
        };

        In these cases, the keys are an already-known set of strings, usually picked to be identifier-like (no spaces, weird punctuation, etc), so I will usually not quote keys in this kind of hash.

        The other way of using hashes is as basically a mapping from values of one type to values of another type. For example:

        my %ages = (
            'Alice' => 41,
            'Bob'   => 45,
            'Carol' => 38,
        );

        Quoting the keys for this kind of hash is useful because you never know when another value is going to be added which isn't a safe identifier ($ages{"d'Artagnan"} = 62) plus it helps visually distinguish the second kind of hash from the first kind of hash.

        Do I consistently follow this rule? Absolutely not.

Node Type: perlquestion [id://11149996]
Approved by johngg
Front-paged by kcott