Meditations


If you've discovered something amazing about Perl that you just need to share with everyone, this is the right place.

This section is also used for non-question discussions about Perl, and for any discussions that are not specifically programming related. For example, if you want to share or discuss opinions on hacker culture, the job market, or Perl 6 development, this is the place. (Note, however, that discussions about the PerlMonks web site belong in PerlMonks Discussion.)

Meditations is sometimes used as a sounding-board — a place to post initial drafts of perl tutorials, code modules, book reviews, articles, quizzes, etc. — so that the author can benefit from the collective insight of the monks before publishing the finished item to its proper place (be it Tutorials, Cool Uses for Perl, Reviews, or whatever). If you do this, it is generally considered appropriate to prefix your node title with "RFC:" (for "request for comments").

User Meditations
perldoc.perl.org needs a facelift?
1 direct reply — Read more / Contribute
by EvanCarroll
on Feb 05, 2019 at 14:49
    Where is the source code for this site? Is this another search.cpan situation where no one really knows what the story is? Is the site open source? Why, when you click on a link, is there neither a way to view the source code for the module nor a link back to MetaCPAN?
    https://perldoc.perl.org/File/stat.html
    With MetaCPAN mature now, wouldn't it be in the best interests of the Perl community to redirect all of those links to metacpan and shut the site down?
    Note that it isn't even current: the File::stat doc there is for 5.26, while the latest release (current on MetaCPAN) is 5.28 (released June 23, 2018).


    Evan Carroll
    The most respected person in the whole perl community.
    www.evancarroll.com
Technique for executable modules
2 direct replies — Read more / Contribute
by tlhackque
on Feb 04, 2019 at 10:31

    Over the years, I've seen a number of techniques used to enable a Perl module to have a dual life as a command. These range from commented-out mainline code (used for testing) to requiring the command-line user to invoke a 'run' function, and include invoking 'caller' to guess the mode, inspecting $0, or even '$Pkg::FooMode = 'module'; require Pkg::Foo;'.

    None has seemed entirely satisfactory: there always seem to be functional or esthetic compromises, often with some burden on the end user(s) as well as the author.

    Here is a technique that, with minimal one-time setup, hides the ugliness from all users, yet has minimal impact on the developer.

    Assume that your script is to be installed in /opt/sbin, and that your Perl library includes /usr/lib/perl5/site_perl/5.99.0/.

    Your script looks like:

    #!/usr/bin/perl
    package TL::MyPackage;

    use warnings;
    use strict;

    our $VERSION = 1.0;

    sub new {
        my $class = shift;
        my $obj   = [ @_ ];
        return bless $obj, $class;
    }
    ...

    package main;

    use warnings;
    use strict;

    unless( __FILE__ =~ /\.pm$/ ) {    # Skip if loaded as a module
        # Command-line interface
        require Getopt::Long;
        Getopt::Long->import( qw/GetOptions :config bundling/ );

        my $verbose;
        GetOptions( 'verbose|v!' => \$verbose ) or die "Command error\n";

        my $foo = TL::MyPackage->new( $verbose, ... );
        print $foo->rub( 'lamp' );
        ...
        exit();
    }
    1;
    And the installation looks like:
    mkdir -p /opt/sbin
    cp -p mycommand /opt/sbin/
    mkdir -p /usr/lib/perl5/site_perl/5.99.0/TL
    ln -s ../../../../../../opt/sbin/mycommand \
          /usr/lib/perl5/site_perl/5.99.0/TL/MyPackage.pm

    This allows the user to treat the module as an ordinary command - no CamelCase name, 'funny' .pm extension, or '-e run()' to remember (or wrap in another script). No worries for the module author about strange ways it might be loaded, forgetting to shut off test code before release, or having 'shift' default to '@_' instead of '@ARGV'. (S)he can, of course, still provide a 'run' function if desired, but it's not required. And of course, lazy loading the command-only modules reduces the dual-life expense when used as a pure module.

    There is no unusual magic about the locations chosen for this explanation: the .pm symlink goes in the library where 'use' & 'require' can find it, and the executable goes in PATH. Of course, you can invert the direction of the symlink if you think of the module as primary and the executable as secondary. TMTOWTDI; tastes vary. Note that File::Spec->abs2rel is a convenient way to generate the symlink in an installation script. You do want to use a relative target in case the file is on a mountpoint.
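    For concreteness, a minimal sketch of that installation step done from Perl; the paths mirror the example above, and the surrounding code is illustrative rather than part of the technique itself:

    use strict;
    use warnings;
    use File::Spec;
    use File::Basename qw(dirname);

    my $script = '/opt/sbin/mycommand';
    my $pm     = '/usr/lib/perl5/site_perl/5.99.0/TL/MyPackage.pm';

    # The target must be relative to the directory holding the link,
    # so the link keeps working if the tree is mounted elsewhere.
    my $target = File::Spec->abs2rel( $script, dirname($pm) );
    symlink $target, $pm or die "symlink $target -> $pm: $!";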

n-dimensional statistical analysis of DNA sequences (or text, or ...)
2 direct replies — Read more / Contribute
by bliako
on Jan 20, 2019 at 17:22

    The recent node Reduce RAM required by onlyIDleft asks for efficient shuffling of a DNA sequence. As an answer, hdb suggested (Re: Reduce RAM required) building the probability distribution of the DNA sequence at hand (simply a hash of the four DNA bases A, T, G, C, each holding the count/probability of that base appearing). One can then ask the built distribution to output symbols according to their probability. The result will reflect the statistical properties of the input.

    I added that a multi-dimensional probability distribution could create DNA sequences closer to the original, because it would count the occurrence of single bases (A, T, G, C) as well as pairs of bases (AT, AG, AC, ...), triplets, and so on. So I concocted this module, released here, which reads some input data consisting of symbols, optionally separated by a separator, and builds its probability distribution, plus some other structures, for a specified number of dimensions, i.e. the ngram length. That statistical data can then be used to predict output for a given seed.

    For example, for ndim = 3 the cumulative distribution will look like this:

    CT => ["A", 0.7, "G", 1],
    GA => ["T", 0.8, "C", 1],
    GC => ["A", 0.2, "T", 1],
    GT => ["A", 1],

    meaning that CT is followed by A 7/10 of the time, whereas the only other alternative, CT followed by G, appears 3/10 of the time. The above structure (and especially the fact that it holds cumulative probabilities) makes it quite easy to return A or G weighted by their probabilities, by just checking whether rand() falls below or above 0.7. And so I learned that the likelihood of certain sequences of bases is much larger, even by an order of magnitude, than that of others. For example:
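    A minimal sketch (not the module's actual code) of how such a cumulative structure can be used to pick the next base, weighted by probability:

    use strict;
    use warnings;

    my %cumulative = (
        CT => [ 'A', 0.7, 'G', 1 ],
        GA => [ 'T', 0.8, 'C', 1 ],
    );

    sub next_symbol {
        my ($dist, $seed) = @_;
        my $entry = $dist->{$seed} or return;
        my $r = rand();
        # walk the (symbol, cumulative probability) pairs and return the
        # first symbol whose cumulative probability is at or above the draw
        for ( my $i = 0; $i < @$entry; $i += 2 ) {
            return $entry->[$i] if $r <= $entry->[ $i + 1 ];
        }
        return $entry->[-2];
    }

    print next_symbol( \%cumulative, 'CT' ), "\n";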

    AAA => 0.0335, CCC => 0.0158, GGG => 0.0158, TTT => 0.0350, GCG => 0.0030, ...

    Another feature of the module is that one can use a starting seed, e.g. AT and ask the system to predict what base follows according to the statistical data built already. And so, a DNA sequence of similar statistical properties as the original can be built.

    The module I am describing can also be used for reading in any other type of sequence, not just DNA sequences (fasta files). Text for example. And the module, then, becomes a random text generator emulating the style (i.e. the statistical distribution in n-dimensions) of the specific corpus or literature opus.

    The internal data structure used is a hash of hashes, which even at 4 dimensions over literary text stays reasonably small, because it is usually very sparse. For example, the 3-dimensional statistical distribution of Shelley's Frankenstein can be serialised to a 1.5MB file. So huge data can be compressed down to the multi-dimensional probability distribution I am describing, if all one wants is to keep creating clones of that data with respect to the statistical properties of the original (and not an exact replica of the original).

    Of course, finite input data may not encompass all the details of the process which produced it, and such a probability distribution, even over n dimensions, may prove insufficient for emulating the process.

    There are already a few modules on CPAN for doing something similar, but I wanted to be able to read huge datasets without resorting to intermediate arrays, for obvious reasons. And I wanted to have access to the internal data representing the probability distribution of the data.

    Also, I wanted to read the data once, build the statistical distribution and save it, serialised to a file. Then I could do as many predictions as I wanted without re-reading huge data files.

    Lastly, I wanted to implement an efficient-to-store n-dimensional histogram, so to speak, using the very simple method of a hash of hashes, with the twist that one can also interrogate the HoH by asking: what follows the phrase "I took my"?

    And here are three scripts for reading DNA or text sequences and doing a prediction. Available options can be inferred by looking at the scripts, or see the examples further down.

    analyse_DNA_sequence.pl:

    analyse_text.pl:

    predict_text.pl:

    Here is some usage:

    $ wget ftp://ftp.ncbi.nih.gov/genomes/Homo_sapiens/CHR_20/hs_ref_GRCh38.p12_chr20.fa.gz   # warning ~100MB

    # this will build the 3-dim probability distribution of the input DNA seq
    # and serialise it to the state file
    $ analyse_DNA_sequence.pl --input-fasta hs_ref_GRCh38.p12_chr20.fa --ngram-length 3 \
        --output-state hs_ref_GRCh38.p12_chr20.fa.3.state --output-stats stats.txt

    # now work with some text, e.g. http://www.gutenberg.org/files/84/84-0.txt
    # (easy on the gutenberg servers!!!)
    $ analyse_text.pl --input-corpus ShelleyFrankenstein.txt --ngram-length 2 --output-state shelley.state
    $ predict_text.pl --input-state shelley.state

    I am looking for comments/feature requests before publishing this. Once it is published I will replace the code in this node with links. Please let me know ASAP if I am abusing resources by posting this code here.

    bw, bliako

[RFC] Discipulus's step by step tutorial on module creation with tests and git
6 direct replies — Read more / Contribute
by Discipulus
on Dec 19, 2018 at 06:14
    Good morning nuns and monks,

    Nothing to read during next holidays? ;=)

    I wrote this tutorial and I'd really appreciate your comments and corrections.

    The following tutorial is in part the fruit of a learn-by-teaching process, so before pointing newbies to my work I need some confirmation.

    The tutorial is a step-by-step journey into Perl module development with tests, documentation and git integration. It seemed to me the bare-minimum approach as of late 2018.

    Because the post is long (over the 64kB PerlMonks limit), the second part is in a reply to this node; as a result, the table-of-contents links for the second part are broken (I'll fix them sooner or later).

    This material is already in its github repository, under a long name (which I hope makes it easier to find). The code generated in this tutorial also has its own archived repository.

    I'll gladly accept comments here or as pull requests (see contributing), as you wish, about:

    • the testing part: I've managed only the basics of testing, but I'm quite new to this, so please review (see the sketch after this list)
    • errors in the overall discussion of the matter
    • English errors or misuses of terms (I tend to construct phrases in a Latin way.. sorry ;)
    • git-related errors
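    To give an idea of the level aimed at, here is a minimal sketch of the kind of test file such a tutorial ends up with; the module and sub names are placeholders, not necessarily the ones used in the tutorial:

    use strict;
    use warnings;
    use Test::More tests => 2;

    use_ok('My::Module');                                      # placeholder module name
    ok( My::Module->can('new'), 'constructor is available' );  # placeholder method name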

    The module code presented is fairly simplistic and I do not plan to change it: the tutorial is about everything but coding; tests, documentation, distribution and revision control are the points of this guide, and I tried to keep everything as small as possible. If you really cannot resist rewriting the code of the module, rewrite it all and I can add a TIMTOWTDI section, just for amusement.

    On the other hand, day eight: other module techniques has room for improvements and additions: if you want to share your own techniques for testing, makefile hacking or automating distribution, I think this is the place. I chose module-starter to sketch out the module, as it seemed to me simple and complete, but it has some quirks. Examples of other tools could be worth another day of the tutorial, but keep it simple.

    Once this tutorial has been commented on, I'll remove the [RFC] from the title and point newcomers to this guide (or, better, perhaps repost it in another section?), if you judge it worth reading.

    Thanks!

    L*

    UPDATE 20 Dec.: Added a readmore tag around the content below. The online repository is receiving some pull requests ;) so I added a version number to the doc. Tux is very busy but pointed me to Release::Checklist and I'll add it to the tutorial.


Python tricks
6 direct replies — Read more / Contribute
by pme
on Dec 18, 2018 at 07:21
Inserting Code Before an -n/-p Loop
3 direct replies — Read more / Contribute
by haukex
on Dec 17, 2018 at 15:51

    Probably most people know about the "Eskimo greeting" secret operator, }{. I'm not sure if the following trick is common knowledge, but I just saw it for the first time in this blog post by Yary:

    $ perl -MO=Deparse -M'5;print "foo"' -ne '}{print "bar"'
    sub BEGIN {
        require 5;
        ()
    }
    print 'foo';
    LINE: while (defined($_ = readline ARGV)) {
        ();
    }
    {
        print 'bar';
    }

    A neat little trick :-)
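    For comparison, the documented way to run code before and after the implicit -n loop is with BEGIN and END blocks; an equivalent one-liner (a sketch, not from the blog post, with an illustrative input file name):

    $ perl -ne 'BEGIN { print "foo" } END { print "bar" }' somefile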

RFC: Set::Select: get intersection or union of sets; or more generally, the set of elements that are in one or more input sets
2 direct replies — Read more / Contribute
by kikuchiyo
on Dec 12, 2018 at 16:58

    If we have two sets, getting their union, intersection or symmetric difference is considered a solved problem; there is even an entry in perlfaq4 about it. The situation is slightly more complicated if we have more than two input sets, because then it becomes valid to ask for, e.g., the set of elements that are in the first or second set but not in the third and fourth, etc. The number of combinations grows rapidly with the number of input sets, and writing ad hoc solutions to each little problem becomes infeasible. So a more general solution is needed; the hard part is designing the user interface so that it can express all the possible combinations of selections in a general and flexible, yet efficient and understandable manner. A cursory search of CPAN brought up several (abandoned?) modules in the Set::* namespace, but none of them was exactly what I needed.

    I have the outline of an attempted solution. It's an OO module that has a constructor to which the input sets can be fed, and one method called select, which accepts a selector string argument and emits (an arrayref of) the elements that match the selector. If we have 3 input sets, then the '110' selector string selects all elements that are in the first and second sets but not in the third.

    #!/usr/bin/perl
    {
        package Set::Select;
        use strict;
        use warnings;

        sub new {
            my ($class, $args, @sets) = @_;
            my $attr;
            $attr = $args->{key} if (ref $args eq 'HASH' and exists $args->{key});
            my $self;
            for my $i (0..$#sets) {
                for my $elem (@{$sets[$i]}) {
                    my $key = defined $attr && ref $elem eq 'HASH' ? $elem->{$attr} : $elem;
                    $self->{$key}->[1] //= $elem;
                    $self->{$key}->[0] //= '0' x @sets;
                    vec($self->{$key}->[0], $i, 8) = 0x31;
                }
            }
            bless $self, $class;
        }

        sub select {
            my ($self, $bits) = @_;
            return [map { $self->{$_}->[1] } grep { $self->{$_}->[0] =~ $bits } keys %$self];
        }
    }

    package main;
    use strict;
    use warnings;
    use Data::Dumper;

    my $x = Set::Select->new({}, [1, 3, 5, 7], [2, 3, 6, 7], [4, 5, 6, 7]);
    print Dumper $x->select($_) for qw/100 101 111 10. .../;

    my $y = Set::Select->new({key => 'id'},
        [{id => 1, value => 1}, {id => 3, value => 1}, {id => 5, value => 1}, {id => 7, value => 1}],
        [{id => 2, value => 2}, {id => 3, value => 2}, {id => 6, value => 2}, {id => 7, value => 2}],
        [{id => 4, value => 3}, {id => 5, value => 3}, {id => 6, value => 3}, {id => 7, value => 3}],
    );
    print Dumper $y->select($_) for qw/100 101 111 10. .../;

    A Venn diagram that may or may not make the intent clearer:

             .---.
            /  1  \
           |       |
        .--+--. .--+--.
       /   | 3 X 5 |   \
      |    |  / \  |    |
      |  2  \/ 7 \/  4  |
      |     |`---'|     |
      |      \ 6 /      |
       \      \ /      /
        `------^------'
    

    I think these selector strings as the primary (and only) user interface are better than the possible alternatives that come to mind: a verbose, ad hoc query language would have to be explained at length in the documentation, tested carefully in the source, and parsed painfully at runtime, while a forest of arbitrarily named methods to select this or that subset would bloat the code needlessly and make it harder to use.

    Using regular expressions opens the door to abuse, but it also allows convenient and terse selector strings, makes the implementation efficient, and it's something people already know.
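    A minimal illustration (not taken from the module) of how a regex selector such as '1.0' matches the per-element membership strings the module builds:

    use strict;
    use warnings;

    my %membership = (      # element => "in set 1? in set 2? in set 3?"
        1 => '100',
        3 => '110',
        5 => '101',
        7 => '111',
    );
    my $selector = '1.0';   # in set 1, not in set 3; set 2 doesn't matter
    my @matched  = grep { $membership{$_} =~ /^$selector$/ } sort keys %membership;
    print "@matched\n";     # prints: 1 3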

    If the elements are hashrefs (representing a record or object or the like), there is a mode to use not the elements themselves but a named key inside them as the basis of selection, as the second example shows. This mode can be considered buggy as it is now, because only one version of a record with a given key is stored (in the example, some values are discarded). I don't have a good solution for this problem yet, partly because it would make the implementation slower and more complicated, and partly because I don't know what the right thing to do would be.

    Questions:

    • Is this useful to anyone?
    • How to make it better?
    • What would be a good name if this were to become a module? I've tentatively chosen Set::Select but it may be too generic.
Camel vs. Gopher
4 direct replies — Read more / Contribute
by reisinge
on Dec 08, 2018 at 14:16

    I've been using Perl for several years, mostly for small to medium sized sysadmin-type programs (automation, gluing, data transformation, log searching). Recently I started to learn Go. I wanted to write something in both languages and compare. Here goes.

    The Perl code is less than half the size of the Go code:

    $ ls -l x.* | perl -lanE 'say "$F[8]\t$F[4] bytes"'
    x.go    694 bytes
    x.pl    294 bytes

    Perl code is more than 4 times slower when run ...

    $ time go run x.go > /dev/null

    real    0m1.222s
    user    0m1.097s
    sys     0m0.220s

    $ time perl x.pl > /dev/null

    real    0m5.358s
    user    0m4.778s
    sys     0m0.497s

    ... and more than 5 times slower compared with the compiled Go binary:

    $ go build x.go
    $ time ./x > /dev/null

    real    0m0.947s
    user    0m0.890s
    sys     0m0.126s

    The code generates 10 million random integers from 0 to 9. Then it counts the occurrences of each generated integer and prints them.

    $ cat x.go
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        // Seed the random number generator
        seed := rand.NewSource(time.Now().UnixNano())
        r1 := rand.New(seed)

        // Generate random integers
        var ints []int
        for i := 0; i < 10000000; i++ {
            n := r1.Intn(10)
            ints = append(ints, n)
        }

        // Count ints occurrence
        count := make(map[int]int)
        for _, n := range ints {
            count[n]++
        }

        // Sort ints
        var intsSorted []int
        for n := range count {
            intsSorted = append(intsSorted, n)
        }

        // Print out ints occurrence
        for n := range intsSorted {
            fmt.Printf("%d\t%d\n", n, count[n])
        }
    }

    $ cat x.pl
    #!/usr/bin/perl
    use warnings;
    use strict;

    # Generate random integers
    my @ints;
    push @ints, int rand 10 for 1 .. 10_000_000;

    # Count ints occurrence
    my %count;
    $count{$_}++ for @ints;

    # Print out ints occurrence
    for my $int ( sort keys %count ) {
        printf "%d\t%d\n", $int, $count{$int};
    }

    In conclusion I must say that I like both languages. I like beer too :-).

    Always rewrite your code from scratch, preferably twice. -- Tom Christiansen
Delegating responsibility of one's CPAN distributions
4 direct replies — Read more / Contribute
by stevieb
on Dec 04, 2018 at 20:21

    Say, for example, I get eaten by a bear, roll my truck off into the lake, get crushed by falling mountain boulders or otherwise burn in a fire: I'm wondering what will happen to my CPAN distributions.

    I mean, I won't care if I'm dead, but if they are being used (from what I can tell, they are, in a 'minimalistic' sense; I don't know numbers), what happens? After years of trying to understand this, my impression is that by default they'll fall into a state of disrepair and then go through the normal channel of adoption.

    I'm wondering what my fellow Monks think about this.

    My question is: would it be worth working it up the chain to have a "will" of sorts, someone you could "dedicate" your distributions to, within the Makefile (or whatever dist thingy one uses)? A new attribute, effective across all build platforms and accepted by CPAN, that acknowledges who you want to oversee what you've written.

    I'm not talking about co-auth here. I'm talking about someone who may not even care about one's work. I'm talking about someone who cares about Perl enough that one would feel comfortable with rightfully distributing one's distributions accordingly, because they are somewhat familiar with the Perl ecosystem.

    This is totally off the wall, but I've been through so much in the last 24 months, that I'm trying to think of everything.

    Would an IF_I_DIE flag within a Makefile.PL that is easily searchable be a good idea, or the idea of a madman who keeps buying sensors to write Perl around?
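    One way such a wish could be recorded today, as a sketch only: the CPAN::Meta spec allows custom "x_"-prefixed keys, which ExtUtils::MakeMaker can emit via META_MERGE. The key name below is made up, not an accepted convention:

    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME       => 'My::Module',
        VERSION    => '0.01',
        META_MERGE => {
            'meta-spec' => { version => 2 },
            x_successor => 'SOMEPAUSEID',   # hypothetical "will" field
        },
    );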

Debugging preprocessor
1 direct reply — Read more / Contribute
by jo37
on Nov 28, 2018 at 17:51
    Hello monks and nuns,

    Tired of coding print statements for debugging purposes again and again, while being too lazy to use the debugger, I recently wrote a module, Debug::Filter::PrintExpr, that (acting as a source filter) transforms some special comments into debugging print statements. As an example, this piece of code

    use Debug::Filter::PrintExpr;
    ...
    my $some_var = 'some content';
    my @array = qw(this is an array);
    my %hash = (key1 => 'value1', key2 => 'value2');

    #${$some_var}
    #@{@array}
    #${$array[3]}
    #%{custom_label: %hash}
    would generate code that produces an output like:
    line 28: $some_var = 'some content'
    line 29: @array = ('this', 'is', 'an', 'array')
    line 30: $array[3] = 'array'
    custom_label: %hash = ('key2' => 'value2', 'key1' => 'value1')
    I'd provide the module here if someone is interested. But maybe this is just weird stuff.
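    For readers curious how such a thing can work at all, here is a minimal sketch (not jo37's actual module) of a source filter built on Filter::Simple that turns a comment of the form #${EXPR} into a print statement:

    package PrintExprDemo;
    use strict;
    use warnings;
    use Filter::Simple;

    FILTER {
        # $_ holds the source that follows "use PrintExprDemo;".
        # Rewrite each line of the form  #${EXPR}  into:
        #   print STDERR 'EXPR = ', EXPR, "\n";
        s{^\s*\#\$\{(.+?)\}\s*$}{print STDERR '$1 = ', $1, "\\n";}gm;
    };

    1;

    # In a script:
    #   use PrintExprDemo;
    #   my $some_var = 'some content';
    #   #${$some_var}   # becomes: print STDERR '$some_var = ', $some_var, "\n";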

monk tags
1 direct reply — Read more / Contribute
by Aldebaran
on Nov 26, 2018 at 19:47

    Hello Monks

    I've been deeply ensconced in getting some relatively-minor and low-level details squared away, but I find it hugely technical and unpredictable as I try to grasp what is happening at my terminal. In order to share it properly, it has to be pasted into a text document. Under such circumstances, I like having pairs of code tags at the ready. One can edit them later on. I seem to do better when I start the write-up early, especially with mystifying and hard-to-replicate output.

    Likewise, a handful of p tags are necessary, and why not line them all up in shiny rows, so that we can see their completeness and symmetry. A short interview with this program gets you the tags you need. How many times have you created the tags from scratch and bolloxed it up? Machines can do it better.

    Output then source:

    The output doesn't show on perlmonks. It does in the file that is formed from the munged time. This becomes the prep file for the write-up. Tags are arrayed in order, with nothing between open and close tags, so that's hardly surprising. I wanted to write this utility before I did anything more, to turn my character defect of laziness into something keystroke-saving in the long haul.

    The source file for Text::Template object is:

    $ cat 1.monk.tmpl
    <{$symbol}></{$symbol}>
    $
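    A minimal sketch of how that template could be filled with Text::Template and written out with Path::Tiny; the file names and the list of symbols are illustrative, not Aldebaran's actual script:

    use strict;
    use warnings;
    use Path::Tiny;
    use Text::Template;

    my $template = Text::Template->new( SOURCE => '1.monk.tmpl' )
        or die "Couldn't construct template: $Text::Template::ERROR";

    my $out = path('monk_tags.txt');
    $out->touchpath;    # creates any missing parent directories

    # one pair of tags per requested symbol
    for my $symbol (qw(code p)) {
        $out->append( $template->fill_in( HASH => { symbol => $symbol } ), "\n" );
    }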

    I realize that there are much smaller ways to do what I have achieved here and would welcome them. I believe that Path::Tiny will create the directories it needs with the touchpath method. My pride will not be hurt to see this shortened by great lengths. This isn't going to be the only thing I use Text::Template for in the near term, so I wanted to roll it out with the full framework of Path::Tiny.

    Namaste,

POD <-> Markdown notation
2 direct replies — Read more / Contribute
by stevieb
on Nov 24, 2018 at 17:41

    Are there any projects officially making an attempt at this?

    I'm getting bored of writing the same docs in the two different markup languages. I do have a couple of translation scripts, but they aren't even public, as they are inconsistent and the resulting doc often requires some manual work.

    If there are worthy near-prod-consistent efforts, I'll join and help. If not, I'll create something that the community can assist with.

Pls more operators, e.g. <&&=
2 direct replies — Read more / Contribute
by rsFalse
on Nov 15, 2018 at 09:57
    Good day, monks.

    I've seen that Perl likes operators and has plenty of them. I was amazed when I saw the new Bitwise String Operators in 5.22.
    And I remember that I usually write '$max < $c and $max = $c;' (when not using List::Util qw( max )), which is repetitive. My idea is to somehow shorten such statements, so I suggest an operator that makes these two pieces of code equivalent: '$max < $c && ( $max = $c );' === '$max <&&= $c;'. The new operator '<&&=' would come with similar variants: '>&&=', '<=&&=', '>=&&=', and four more with '||' in the middle, although those are redundant. Associativity could be the same as for other assignment operators. Another suggested construct for the same operation would be '(comparison_op.)=', meaning that any comparison op can be written inside the parentheses, and it corresponds to '$max comparison_op. $c and $max = $c;'. Or is this a bad idea?
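    For reference, a minimal sketch of what the proposed '$max <&&= $c' already looks like today, both as the plain statement and with List::Util (the helper sub name is made up):

    use strict;
    use warnings;
    use List::Util qw(max);

    my ( $max, $c ) = ( 5, 9 );

    $max < $c and $max = $c;    # the pattern the proposed operator would shorten
    $max = max( $max, $c );     # equivalent, using List::Util

    # or wrap it once and reuse ($_[0] is an alias to the caller's variable):
    sub assign_if_less { $_[0] = $_[1] if $_[0] < $_[1] }
    assign_if_less( $max, $c );
    print "$max\n";             # 9
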
Future of Perl
3 direct replies — Read more / Contribute
by Perlchaoui
on Nov 13, 2018 at 05:51

    Hi Monastery !

    I was reading a post about the future of Perl. I don't know if this is the most appropriate place to provide this feedback. I've seen the statistics on TIOBE showing how Perl is losing places in the ranking of the most popular languages. I recently finished my IT studies and discovered the Perl language at my company. I want to point out that the rank of the language doesn't matter much; the most important thing is maintenance activity. Why? Because, by default, Perl is a more powerful language (not only according to me) than Python or Ruby or others. Perl is not so friendly, let's say, or common, but Perl does the job better, because the Perl community is more active and because the language has, in its own genetic heritage, the basis to be essential.

    One black mark is the lobbying, which does not serve Perl's interest, and how the language is taught at school. As we are in a monastery, I would like to suggest this precept: if someone needs to go somewhere, they will need oxygen. Perl is like oxygen (in the air), and there is only one thing to do: make this oxygen available (meaning having a strong Perl community).

    Another black mark, and not the least: the availability of Perl in growing countries. I have had discussions with some foreign IT engineers and software developers. As surprising as it might seem, some of them don't even know what the Perl language is! It could be difficult to teach Perl in universities in Western countries, from a political point of view, due to these lobbies and pressures, but it could be different in growing countries. It's important to make partnerships with them (with their universities and so on). A good idea, maybe, would be to organize the Perl conference or any big Perl-related event in one of these countries.

    Thanks for reading, and sorry for the spam. I wanted to share my feelings.

Why is Perl 4 so popular?
1 direct reply — Read more / Contribute
by Anonymous Monk
on Nov 01, 2018 at 23:37
    Why are so many contenders to the throne of Perl nothing but a cheap copy of Perl 4?

    PHP, Python, Ruby and JavaScript all start by copying Perl's worst practices (according to computer scientists) to become immensely popular, with no strict and globals everywhere.

    But the fun never lasts, because they eventually succumb to the aspersions of computer scientists and add all sorts of cruft to enforce austerities that satisfy obsessively compartmentalized minds.

    I think the reason is this: Languages like Perl force computers to think like people, rather than forcing people to think like computers.

    Don't get me wrong, we need the scientists to build and maintain the playground so we can play, but we also need them to get the heck out of our way, and to stay away!

    Hard-fork Perl with a trendy cool name, make sure the batteries are included by throwing in a kitchen sink of about 1000 of the most awesome CPAN modules in a way that will do everything and run everywhere, and you may have (another) winner.

    Perl 6, seriously, this is sad:

    
    $input.close or die $!;
    close($output);
    
    
