
New Questions
Getopt::Long defaults for options with multiple values
3 direct replies
by PetaMem
on Nov 21, 2014 at 07:23

    Reviewing some middle-aged code, I stumbled across this topic...

    Unfortunately, the Getopt::Long documentation is silent about default values for options with multiple values: Getopt::Long#Options-with-multiple-values. Even more unfortunate is an apparent inconsistency with defaults for single-value options, and perhaps a semantic inconsistency altogether. If you have a single-value option, you may define a default like:

    my $tag = 'foo';   # option variable with default value
    GetOptions ('tag=s' => \$tag);

    This works as expected. Good. You can - of course - do a similar thing for options that take multiple values:

    my $listref = ['a','b','c'];   # option variable with default values
    GetOptions ('list=s{,}' => $listref);

    If you omit the -list option, the program will have the default value, which is good. If, however, you do pass a -list option, Getopt::Long seems to push the supplied values onto the default list. That behavior may have its applications, but it is not great as a default. If you actually want to replace the defaults, you have to define them the ugly, backward, do-it-yourself way:

    my $listref = [];   # no default
    GetOptions ('list=s{,}' => $listref);
    $listref = @{$listref} ? $listref : ['a', 'b', 'c'];   # DIY default

    Which is actually code I see right now. Yuck! That can't be right - can it?
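    One way to get replace-the-defaults semantics without the after-the-fact check is to give Getopt::Long a callback destination that clears the default list the first time the option actually appears. This is only a sketch, assuming the callback is invoked once per value for an `s{,}` spec; `parse_list` and its default values are made-up names for illustration:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Getopt::Long qw(GetOptionsFromArray);

    # Parse a --list option whose supplied values *replace* the built-in
    # defaults rather than being pushed onto them.
    sub parse_list {
        my @args = @_;
        my @list = ('a', 'b', 'c');        # defaults
        my $seen = 0;
        GetOptionsFromArray(\@args, 'list=s{,}' => sub {
            my ($name, $value) = @_;
            @list = () unless $seen++;     # first real value clears defaults
            push @list, $value;
        }) or die "bad options\n";
        return @list;
    }
    ```

    Called as `parse_list('--list', 'x', 'y')` this should return only the supplied values, while `parse_list()` keeps the defaults.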

        All Perl:   MT, NLP, NLU

Runaway CGI script
3 direct replies
by Pascal666
on Nov 19, 2014 at 11:15
    tl;dr: Somehow a CGI script that doesn't write to disk kept running for about 16 hours after the client disconnected, filled up the disk about 10 hours in, and then freed the space when Apache was killed. Contents of script unknown.

    Fully patched CentOS 7. Woke up this morning to "Disk quota exceeded" errors and:

    # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/simfs       25G   25G     0 100% /
    # du -sh
    3.9G    .
    Top indicated that I had plenty of ram left and a CGI script I wrote yesterday was the likely culprit:
    KiB Mem:   1048576 total,   380264 used,   668312 free,        0 buffers
    KiB Swap:   262144 total,    81204 used,   180940 free.    33856 cached Mem

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
     5140 apache    20   0  239888   2348   1756 R  17.3  0.2 144:42.67 httpd
    14980 apache    20   0   30840   1884   1228 S  15.6  0.2 153:43.94 bounced.cgi
    I killed Apache and now my disk and cpu utilization are normal. I didn't have lsof installed so I couldn't see what file was causing the problem.

    Access_log shows only me accessing the script, and error_log shows nothing since I wrote it.

    I wrote this quickly yesterday with no error handling, but the worst I expected to happen on an error was for the script to die. I can't understand how the following could possibly fill up my disk. It appears to work as intended.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use CGI::Carp qw(fatalsToBrowser);

    my $q = new CGI;
    print $q->header;
    print $q->start_html('Bounce summary');
    my @files = </home/user/Maildir/cur/*>;
    for (@files) {
        open(IN, $_);
        while (<IN>) {
            last if /The mail system/;
        }
        while (<IN>) {
            if (m#/$#) {
                print '<p>';
                last;
            }
            s/</&lt/g;
            print;
        }
        close IN;
    }
    print $q->end_html;
    Edited to add:
    Pulling the CGI components out gives nearly identical output to what the web browser tab displays, with no errors showing. The directories I was/will run this against never have subdirectories.

    Having thought about it today, I believe one of my initial assumptions when opening this thread was probably incorrect. As a CGI script it only runs when I access it. I only ran it a couple times in its final state (above). It is probable the stuck version was different, and I simply didn't notice it. It could have run for many hours before crippling the server. I do not make a habit of confirming scripts end when they stop loading or I hit X in my web browser, I just assume Apache will kill them.

    I just really don't understand how a cgi script could stay running without a client attached. I just created one with an intentional infinite loop, and as soon as I hit X Apache killed it.

    From /var/log/messages after I ran "service httpd stop" this morning:

    Nov 19 10:38:44 systemd: httpd.service stopping timed out. Killing.
    Nov 19 10:38:44 systemd: httpd.service: main process exited, code=killed, status=9/KILL
    Nov 19 10:38:44 systemd: Unit httpd.service entered failed state.
    "kill -9 14980" probably would have fixed the problem without killing Apache, but I didn't think of it at the time.

    Update 2:
    It is actually trivial to create a CGI script that won't die when the client disconnects. My test above contained a "print" inside the loop. It looks like Apache disconnects STDOUT when the client disconnects, which causes print to kill the script. For example, a CGI script containing just:

    #!/usr/bin/perl
    sleep 1 while 1;
    will keep running after the client disconnects, and a "service httpd stop" will yield the same errors as above; however, Apache will kill it after the CGI timeout. So apparently one of my interim scripts entered an infinite loop without a print, but with something that caused Apache's timeout not to kill it. Still no idea how that could use up all free disk space, and then free it immediately when killed.

    I just tried writing to STDERR in the loop, both with "print STDERR" and by trying to read a closed filehandle. In both cases error_log recorded the errors immediately and continued to grow in size. When I experienced the disk full error yesterday one of the first things I checked was the log files. error_log was only 7728 bytes.
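    The behavior described above is consistent with SIGPIPE semantics: writing to a handle whose reader has gone away fails with EPIPE and raises SIGPIPE, which is fatal by default. A minimal stand-alone sketch on a POSIX-ish system, using a local pipe in place of Apache's connection:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Handle;

    my $got_pipe = 0;
    local $SIG{PIPE} = sub { $got_pipe = 1 };   # default action would kill us

    pipe(my $reader, my $writer) or die "pipe: $!";
    $writer->autoflush(1);   # force print to issue the write(2) immediately
    close $reader;           # the "client" disconnects

    my $ok = print {$writer} "some output\n";   # fails with EPIPE, raises SIGPIPE
    # Without the handler above, this print would have terminated the process,
    # which matches a CGI dying on its first print after Apache closes STDOUT.
    ```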

Basic Apache2::REST implementation
1 direct reply
by Anonymous Monk
on Nov 19, 2014 at 10:33

    I'm new to Perl and I'm trying to implement a REST API using Perl. I installed Apache2::REST, but I couldn't find a detailed enough example to implement it (I'm poor in Perl).

    OK, I see the below details in the documentation that comes with the module:

    <Location />
        SetHandler perl-script
        PerlSetVar Apache2RESTHandlerRootClass "MyApp::REST::API"
        PerlResponseHandler Apache2::REST
    </Location>

    I use Apache2 with mod_perl; my web root path is /usr/local/apache2/htdocs

    and I see the sample program/package named MyApp::REST::API in the documentation.

    My question is: where should I place this package? Under ..../htdocs/MyApp/REST/? Or ..../htdocs/? I tried both but no luck...

    Could anyone please guide me to implement this API?

loading libxml2 as a prerequisite
3 direct replies
by jandrew
on Nov 18, 2014 at 19:55


    I have recently uploaded a package to CPAN: Spreadsheet::XLSX::Reader::LibXML. The package is built on XML::LibXML, which requires the libxml2 library in order to build successfully. Strangely, this seems to come packaged with the OS for Windows but not for Unix or Linux OSes. I'm wondering if there is a good way to require libxml2 and its dev package on non-Windows systems through Makefile.PL magic with a Dist::Zilla plugin, ExtUtils::MakeMaker, or Module::Build, in order to auto-build this package from CPAN(M|P). I first noticed this issue when I wasn't getting many test reports from Linux / Unix systems on the CPAN Testers page.

    I am not a strong ExtUtils::MakeMaker or Module::Build user so I have relied heavily on Dist::Zilla++ and friends to get my modules out in the past. I will knuckle down and read what I need to but I'm not quite sure where to start. If it were as simple as requiring Alien::LibXML I might have tried to muddle through but I think I'm out of my depth here. Pointers on where to start would be greatly appreciated!
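    One common pattern (used alongside real solutions such as Devel::CheckLib or the Alien::LibXML route mentioned above) is to probe for the `xml2-config` program in Makefile.PL and exit cleanly when it is missing, so CPAN testers report "NA" rather than a build failure. A sketch with a hypothetical `have_command` helper; the PATH splitting below assumes a Unix-like system:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Return true if an executable with the given name is found on PATH.
    # Note: splits PATH on ':', so this is Unix-only as written.
    sub have_command {
        my ($cmd) = @_;
        for my $dir (split /:/, $ENV{PATH} // '') {
            return 1 if $dir ne '' && -x "$dir/$cmd";
        }
        return 0;
    }

    # In a real Makefile.PL, before WriteMakefile():
    #   unless (have_command('xml2-config')) {
    #       warn "libxml2 development files not found; cannot build\n";
    #       exit 0;   # a clean exit here reads as NA, not FAIL, to testers
    #   }
    ```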

Watching File Changes Under OSX
1 direct reply
by iaw4
on Nov 18, 2014 at 13:45

    There are a number of nice packages on CPAN for monitoring files in a cross-OS portable fashion, such as File::Monitor, File::ChangeNotify, or AnyEvent::Filesys::Notify. I believe that, for quick real-time notifications, they rely on Mac::FSEvents; otherwise, they fall back to very slow scanning.

    The problem is that Mac::FSEvents no longer compiles on OS X Yosemite. There are now errors in FSEvents.xs, probably due to a change in OS X.

    Is there a different CPAN module recommended that provides immediate notification of file changes? I just need a simple blocking call on a few files with a callback.
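    Until the native backend builds again, the portable (if slow) fallback is the stat-based polling those modules already use. A minimal sketch of that idea; `scan_mtimes` and `changed_files` are made-up names, not part of any of the modules above:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Record the mtime of each watched file (-1 if it does not exist).
    sub scan_mtimes {
        my @files = @_;
        return { map { $_ => ((stat $_)[9] // -1) } @files };
    }

    # Compare two scans and report files whose mtime differs.
    sub changed_files {
        my ($old, $new) = @_;
        return sort grep { ($old->{$_} // -1) != $new->{$_} } keys %$new;
    }

    # A simple blocking watch loop would then be:
    #   my $before = scan_mtimes(@files);
    #   while (1) {
    #       sleep 1;
    #       my $now = scan_mtimes(@files);
    #       if (my @hit = changed_files($before, $now)) {
    #           $callback->(@hit);
    #           $before = $now;
    #       }
    #   }
    ```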

Archive::Extract alternatives?
1 direct reply
by beginner1010
on Nov 18, 2014 at 10:32
    Hello everybody,

    I've got a problem with the Archive::Extract module and .tar archives. It looks like it can only handle a file size in some proportion to the system's RAM. Currently we only have 2 GB in the virtual server, which means that every .tar file above about 1.2 GB will freeze the script.

    Simply upgrading the system's memory is unfortunately not an option. The system does not have direct internet access, so installing any further packages/extensions is a real pain which I would like to avoid if possible.

    I already tried Archive::Tar, which has the same restrictions. I assume both modules place the archive in RAM to be able to manipulate it before the actual extraction.
    $Archive::Extract::PREFER_BIN is set to true, which does no good because the script is running on a Windows system. I also found Archive::Extract::Libarchive, which needs libarchive installed; I did not look further into this yet because it needs further programs to compile it first on the system.

    Is there a way to install a command line tool on Windows so that $Archive::Extract::PREFER_BIN will work? Are there any other alternatives to the module? We do not need anything else besides unpacking the files to a specific folder.

    Any help is highly appreciated!

    I tried to reply to your comments but I only get a "Permission Denied" page :(

    Anyway, thank you for your quick reply!
    I looked into Archive::Tar::Wrapper and so far it looks very promising. How can I install tar on a Windows system? Or can I use any other archiver (WinRAR/7-Zip...), too?

    I totally agree: usually it is just a few clicks to add more memory to a VM, but we have quite strict processes here. Adding more memory has to be requested through the proper channels and can take up to several months..
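    For what it's worth, reasonably recent versions of the core Archive::Tar module can walk an archive one member at a time via `iter`, so only the current entry has to fit in memory rather than the whole archive. A sketch under that assumption (`extract_streaming` is a made-up name; check that your installed Archive::Tar supports `iter`):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Archive::Tar;
    use File::Spec;

    # Extract a .tar file one member at a time, so only the current
    # entry needs to be held in memory, not the whole archive.
    sub extract_streaming {
        my ($tarfile, $dest) = @_;
        my $next = Archive::Tar->iter($tarfile);
        while (my $entry = $next->()) {
            $entry->extract(File::Spec->catfile($dest, $entry->full_path))
                or warn "failed to extract " . $entry->full_path . "\n";
        }
    }
    ```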
utf8 "\xD0" does not map to Unicode at /path/ line line_number, <STDIN> line line_number
3 direct replies
by igoryonya
on Nov 18, 2014 at 08:55

    Also, I get:
    utf8 "\xD1" does not map to Unicode at /path/ line line_number, <STDIN> line line_number.

    I get it with some file names piped from the find program. It happened only with some file names recently, for the first time in the few years that I've been using and developing this program.

    Seems like some of the file names are corrupt.

    When I print out such file names with my program, I get something like:


    Ф\xD1%80\xD1%8Dнк \xD0%9F\xD1%8C\xD1%8E\xD1%81елик. \xD0%9D\xD0%9B\xD0%9F. \xD0%9C\xD0%95Т\xD0%90 \xD0%9Cодел\xD1%8C.webm

    The same file names displayed on the terminal by find before piping to my program display:


    Ф?%80?%8Dнк ?%9F?%8C?%8E?%81елик. ?%9D?%9B?%9F. ?%9C?%95Т?%90 ?%9Cодел?%8C.webm

    As I said, it's the first time I've encountered such a problem after a few years of daily usage of this program.

    here is a sample piping launch of the program from the linux terminal:
    find /some/path -type f| /some/path/ /path/to_folder/with_similar_dir_tree/ -parameters


    I've just noticed that the file names get truncated after I tried: find /some/path -type f -exec /path/ {} /path/to_folder/with_similar_dir_tree/ -parameters \;
    The path being provided by {} is truncated significantly; maybe this is the problem that happens with stdout|stdin.
    It seems like there is a very small limit on how many characters can be piped or passed by {}, or maybe the names are being truncated because of invalid characters.
    I guess I have to resort to using Perl's internal find facility.
    I don't see anything wrong with that command; I just wanted my program to be flexible, so it could be used either way: by using its internal directory traversal, or with paths being piped from some other program.
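    Whether the byte strings arrive from a pipe or from Perl's own File::Find traversal, one defensive option is to decode them leniently, so malformed UTF-8 becomes the U+FFFD replacement character instead of triggering "does not map to Unicode" warnings. A small sketch; `safe_decode_name` is a made-up helper, not part of the poster's program:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Encode qw(decode FB_DEFAULT);

    # Decode a byte string as UTF-8, replacing malformed sequences with
    # the Unicode replacement character (U+FFFD) instead of warning or dying.
    sub safe_decode_name {
        my ($bytes) = @_;
        return decode('UTF-8', $bytes, FB_DEFAULT);
    }

    # Combined with Perl's own traversal instead of piping from find(1):
    #   use File::Find;
    #   find(sub { my $name = safe_decode_name($File::Find::name); ... }, $dir);
    ```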

Trying to Understand the Discouragement of Threads
7 direct replies
by benwills
on Nov 18, 2014 at 01:18

    In the most sincere and blunt sense of the word, I've been a hack programmer since I started about 20 years ago. I never went deep into learning the art of programming, but could usually Frankenstein together what I needed. I'm stating this so you may consider the source (that would be me) of this question about why the use of threads is discouraged...

    I've spent the last two+ weeks learning(ish) perl to write the leanest and fastest threaded/asynchronous/parallel/forked code to perform a pretty basic task, millions of times a day (downloading web pages). In the process, I tried every forking/parallel/asynchronous/threaded solution I could hack together. I tried every http client I could find. I tested all of them in terms of speed, accuracy, and resource usage. (If it's important: in the end, I went with a pure-perl socket connection (not IO::Socket, but Socket) with some fine tuning of my own.)

    But, more to the point of the question, I found that absolutely no solution competed with threads in any way, shape, or form. Every non-thread solution was much heavier than threads, functioned much slower, and, for whatever reason (additional layers of code?), produced less accurate results and required more "management" in the code.

    Yes, figuring out the right threads solution took longer. But for the best solution (if threads is the best solution), I'll spend an extra few days on it to get it right.

    I've seen the heated debates about thread usage. I've read just about every single piece of threaded code BrowserUK has posted (and couldn't have written what I wrote without his help in the forums). And I've tested it all for my own use. And the answer is clear: threads wins, hands down.

    So: why such severe discouragement? Because it's a little more confusing and not as straightforward? Is there something I'm missing in terms of performance? Is my code situation unique to where threads are outperforming the alternatives, and this is uncommon?

    I found absolutely zero public data on performance comparisons, but lots of assertions about performance that contradicted my own tests.

    So, I'm just confused and, if I'm missing something, would love to know how to look at this differently.

    But, if I'm not confused, then why are threads so actively and severely discouraged? I'm really just trying to understand this.

    And if this isn't the place for this question, let me know where a more appropriate forum would be.

    Thanks for any help/pointers/thoughts you have that could help me understand this better.
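    For readers wondering what a minimal threads solution looks like, here is a hedged sketch of the usual worker-pool shape with the core Thread::Queue module. It requires a threads-enabled perl; `pmap` and the worker count are made up for illustration and are not the poster's code:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Run $code->($item) for each item across $nworkers threads.
    # Result order is not preserved.
    sub pmap {
        my ($nworkers, $code, @items) = @_;
        my $in = Thread::Queue->new(@items);
        $in->end;                    # workers see undef once drained
        my $out = Thread::Queue->new;
        my @workers = map {
            threads->create(sub {
                while (defined(my $item = $in->dequeue)) {
                    $out->enqueue($code->($item));
                }
            });
        } 1 .. $nworkers;
        $_->join for @workers;
        my @results;
        while (defined(my $r = $out->dequeue_nb)) {
            push @results, $r;
        }
        return @results;
    }
    ```

    In a real downloader the work sub would fetch a URL instead of computing a value, but the queue-in, queue-out structure is the same.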


Tree in perl
6 direct replies
by saurabh2k26
on Nov 17, 2014 at 09:01

    I use Perl to participate in many online quiz websites like CodeChef, HackerRank, etc. In such contests, installing a module like Graph is not allowed. I observed that no one attempts such problems using Perl; most use C++, but on other problems people do use Perl a lot.

    I am looking for an approach in Perl, without using any modules, for tree and graph problems.

    Sample problem: find shortest path from a to b

    line 1 = number of vertices (v) and number of edges (e), separated by a space
    next e lines contain v1 and v2, i.e. there exists an edge between v1 and v2
    the next line contains a and b
    5 6
    1 2
    2 3
    2 4
    4 5
    1 3
    3 5
    1 5
    So we want shortest path from 1 to 5
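    For an unweighted graph like the sample above, a plain hash-of-arrays adjacency list plus breadth-first search needs no modules at all, since BFS reaches every vertex by a minimum number of edges. A sketch (`shortest_path` is a made-up name):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # BFS shortest path on an unweighted, undirected graph.
    # $edges is an arrayref of [u, v] pairs; returns the path as a list.
    sub shortest_path {
        my ($edges, $from, $to) = @_;
        my %adj;
        for my $e (@$edges) {
            my ($u, $v) = @$e;
            push @{ $adj{$u} }, $v;   # undirected: record both directions
            push @{ $adj{$v} }, $u;
        }
        my %prev = ($from => undef);  # visited set + back-pointers
        my @queue = ($from);
        while (@queue) {
            my $u = shift @queue;
            last if $u eq $to;
            for my $v (@{ $adj{$u} || [] }) {
                next if exists $prev{$v};
                $prev{$v} = $u;
                push @queue, $v;
            }
        }
        return unless exists $prev{$to};   # unreachable
        my @path;
        for (my $n = $to; defined $n; $n = $prev{$n}) {
            unshift @path, $n;
        }
        return @path;
    }
    ```

    On the sample input (6 edges, path from 1 to 5) this finds the two-edge route through vertex 3.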
New Meditations
Sub signatures, and a vexing parse
2 direct replies
by davido
on Nov 18, 2014 at 16:53

    I was experimenting with the experimental subroutine signatures feature of Perl 5.20 today along with the much maligned prototypes feature of old, and encountered a most vexing parse that interested me. So I wanted to mention it here.

    First, something that is not a problem:

    *mysub = sub : prototype(\@\@) ($left,$right) { ... };

    This parses correctly, and will generate a subroutine named mysub with a prototype of \@\@, and with named parameters of $left and $right, which when called will contain array refs. But this doesn't do much. My real goal was generating several similar subroutines, and called upon map in a BEGIN{ ... } block to do the heavy lifting.

    Here is a contrived example that isn't terribly useful, but that works, and demonstrates the issue:

    use strict;
    use warnings;
    no warnings 'experimental::signatures';
    use feature qw/say signatures/;
    use List::Util qw(all);

    BEGIN {
        ( *array_numeq, *array_streq ) = map {
            my $compare = $_;
            sub :prototype(\@\@) ($l,$r) {
                @$l == @$r
                    && all { $compare->($l->[$_],$r->[$_]) } 0 .. $#$l
            }
        } sub { shift == shift }, sub { shift eq shift };
    }

    my @left  = ( 1, 2, 3 );
    my @right = ( 1, 2, 3 );

    {
        local $" = ',';
        say "(@left) ",
            ( array_numeq @left, @right ) ? "matches" : "doesn't match",
            " (@right)";
    }

    Do you see what the problem is? The compiler doesn't care for this at all, and will throw a pretty useless compiletime error:

    Array found where operator expected at line 14, at end of line
            (Missing operator before ?)
    syntax error at line 14, near "@\@) "
    Global symbol "$l" requires explicit package name at line 14.
    Global symbol "$r" requires explicit package name at line 14.
    Global symbol "$l" requires explicit package name at line 15.
    Global symbol "$r" requires explicit package name at line 15.
    Global symbol "$l" requires explicit package name at line 16.
    Global symbol "$r" requires explicit package name at line 16.
    Global symbol "$l" requires explicit package name at line 17.
    BEGIN not safe after errors--compilation aborted at line 17.

    Q: So what changed between the first example, that works, and the second example, that doesn't?

    A: Lacking other cues, the compiler parses  sub : as a label named sub, and thinks that I'm trying to call a subroutine named prototype... and from that point on things are totally out of whack.

    Solution: +. Anything that can remind the parser that it's not looking at a label will do the trick. Parentheses around the sub : ... construct work, but + is easier, and probably more familiar to programmers who use + to get {....} treated as an anonymous hash ref constructor rather than as a lexical block.

    With that in mind, here's code that works:

    use strict;
    use warnings;
    no warnings 'experimental::signatures';
    use feature qw/say signatures/;
    use List::Util qw(all);

    BEGIN {
        ( *array_numeq, *array_streq ) = map {
            my $compare = $_;
            + sub :prototype(\@\@) ($l,$r) {
                @$l == @$r
                    && all { $compare->($l->[$_],$r->[$_]) } 0 .. $#$l
            }
        } sub { shift == shift }, sub { shift eq shift };
    }

    my @left  = ( 1, 2, 3 );
    my @right = ( 1, 2, 3 );

    {
        local $" = ',';
        say "(@left) ",
            ( array_numeq @left, @right ) ? "matches" : "doesn't match",
            " (@right)";
    }

    ...or how a single keystroke de-vexed the parse.

    A really simple example that breaks is this:

    my $subref = do {
        sub : prototype($) ($s) { return $s; };   # Perl thinks sub: is a label here.
    };

    I don't really see any way around the parsing confusion in the original version that doesn't work. That perl considers sub : to be a label in the absence of other cues is probably not something that can be fixed without making sub an illegal label. But if I were to file a bug report (which I haven't done yet), it would probably be related to the useless error message.

    This example is fairly contrived, but it's not impossible to think that subs with signatures and prototypes might be generated in some similar way as to fall prey to this mis-parse.

    Credit to mst and mauke for deciphering why the compiler fails to DWIM.


The First Ten Perl Monks
5 direct replies
by eyepopslikeamosquito
on Nov 16, 2014 at 08:31

    Perl Monks has become a big part of my life, so I thought it would be fun to discover how it came into existence in the first place. In particular, I was eager to learn about its earliest users.

    Please note that I only joined Perl Monks in 2002 and have not met any of the major players from the early days. The content of this node therefore is derived only from independent research and speculation, not first-hand knowledge. So, if any folks who actually witnessed these early historic events are listening, please respond away.

    Perl Monks Origins

    After some random googling on the history of Perl Monks, I hit upon this nugget from, the home page of nate: was my own creation after Slashdot was aquired by It and its sister site, PerlMonks were developed in a CMS I designed called The Everything Engine. The Everything Development Company ran from 1999-2001, and consisted of Ryan "dembones" Postma, Darrick Brown, Tim Vroom, Chromatic, and Robo.
    Apart from using the incorrect case of chromatic, it is possible nate may have forgotten some other Everything developers, or at least interlopers, who appear to be among the first ten registered Perl Monks users, as we shall see later.

    After finding that gem, googling for Slashdot uncovered Rob Malda:

    Rob Malda (born May 10, 1976), also known as CmdrTaco, is an American Internet content author, and former editor-in-chief of the website Slashdot. Malda is an alumnus of Hope College and Holland Christian High School. In 1997, Malda and Jeff Bates created Slashdot while undergraduates of Hope College. After running the site for two years "on a shoestring", they sold the site to, which was later acquired by VA Linux Systems. Malda ran the site out of the SourceForge, Inc. office in Dexter, Michigan.

    and Jeff Bates:

    Jeff Bates, also known as hemos, is the co-founder of Slashdot along with Rob Malda ("CmdrTaco"). Bates graduated from Holland Christian High School in 1994 and received a Bachelor's degree in History from Hope College in 1998.

    and more, including Jonathan "CowboyNeal" Pater.

    Holland Michigan

    Holland is a coastal city in the western region of the Lower Peninsula of the U.S. state of Michigan. It is situated near the eastern shore of Lake Michigan on Lake Macatawa. Holland was settled in 1847 by Dutch Calvinist separatists, under the leadership of Dr. Albertus van Raalte. Dire economic conditions in the Netherlands compelled them to emigrate, while their desire for religious freedom led them to unite and settle together as a group.

    -- Holland Michigan (wikipedia)

    It seems that most of the major players in the formation of Perl Monks hail from this picturesque and charming city, an outpost of Dutch culture and tradition in the American MidWest, with a population of just 33,000!

    Hope College is a private, residential liberal arts college located in downtown Holland, Michigan, United States, a few miles from Lake Michigan. It was opened in 1851 as the Pioneer School by Dutch immigrants four years after the community was first settled.

    -- Hope College (wikipedia)

    Slashdot, and the Everything Engine, were the brainchild of a bunch of creative Hope College students. In particular, Rob "Commander Taco" Malda, Jeff "Hemos" Bates, Nathan Oostendorp aka nate, Everything Engine architect Darrick Brown, and Tim Vroom all attended Hope College in the late 1990s. I'm pretty sure they also attended Holland Christian High School.

    I gleaned this information only from browsing the internet, so if anyone knows different -- or knows of other influential players from the early days who also hail from Holland Michigan -- please let us know.

    Blockstackers Inc

    In particular, I don't know where early Perl Monks developer chromatic resided in the late 1990s; though he lives in Hillsboro, Oregon nowadays, he may have lived in Holland Michigan back then while working for BlockStackers.

    BTW, nate refers to this company not as BlockStackers, but as "Blockstackers Intergalactic (BSI)". This node indicates that the domain was owned by "Blockstackers Inc, 116 E.18th Holland MI 49423". Curiously Block Stackers Inc still seems to be in business in Holland today, doing Tax Returns. No idea if this is the same BlockStackers Inc that founded Perl Monks however.

    When I was moving to Amsterdam, pschoonveld advised me to demand a certain clause in my contract that translates to "pants not required"

    -- Ovid from (Ovid - be careful around pschoonveld!) Re(3): Favorite Slacking Activity

    Back in the heady Dot-com bubble era, Blockstackers Intergalactic (BSI) certainly seemed like a cool company to work for, located in a beautiful and historic city, with a name derived from Block Stacking Theory, and where wearing pants was optional.

    Top Ten Countdown

    For fun, and using the background information above, let's try to identify the first ten Perl Monks, counting down from ten to one.

    No 10: cinder_bdt

    Node id: 1336; user since: Dec 23 1999; last here: Jun 15 2005; Experience: 8; 1 post.

    I noticed a cinder_bdt account. This tenth PM user could be Bryan D Thomas based on this twitter account. If anyone knows more, please let us know.

    No 9: yiango

    Node id: 1335; user since: Dec 23 1999; last here: Mar 01 2000; Experience: 27; 6 posts.

    I suspect yiango is an early Slashdot or Holland Michigan person because in RE: The computer I use most runs... he says "Hey Tim" in response to vroom, indicating he may know vroom personally. Moreover, slasholic is a Perl script written by "yiango" to query slashdot for new articles.

    No 8: nate

    Node id: 1316; user since: Dec 23 1999; last here: Jan 24 2009; Experience: 572; 24 posts.

    Too easy. This is obviously the father of the Everything Engine, Holland Christian High School and Hope College alumnus, Nathan Oostendorp.

    No 7: pschoonveld

    Node id: 1027; user since: Dec 02 1999; last here: Oct 01 2010; Experience: 695; 29 posts.

    His home node says he is from "Utrecht, The Netherlands". This may well be Patrick Schoonveld, whose LinkedIn account spookily lists his education as Hope College, Holland Michigan. His CV further lists a stint at SourceForge/Slashdot and indicates he has worked in the Netherlands.

    No 6: sgtbaker

    Node id: 1012; user since: Nov 19 1999; last here: Feb 18 2000; Experience: 4; 1 post.

    This one is quite a mystery. I found this library link where user "sgtbaker" recommends a couple of Perl books. Apart from that, nothing, nada, zilch. If anyone knows, please let us know.

    Update: Found a sgtbaker user on where his school is listed as Hope College and his company as Everything Development Company.

    No 5: vroom

    Node id: 979; user since: Nov 12 1999; last here: Jul 04 2013; Experience: 1007430; 607 posts.

    This is obviously Tim Vroom, the primary founder of the Perl Monks web site. Kudos for that, but how on earth did he manage to accumulate over a million experience points?

    No 4: dbrown

    Node id: 859; user since: Oct 28 1999; last here: Oct 18 2001; Experience: 15; 2 posts.

    This is almost certainly Darrick Brown, Hope College alumnus, outed as an early Everything Developer above by nate.

    No 3: CmdrTaco

    Node id: 857; user since: Oct 27 1999; last here: Nov 03 1999; Experience: 9; 0 posts.

    Though hardly a prolific Perl Monks user, the nickname indicates this is probably the famous Rob "Commander Taco" Malda, co-founder of Slashdot.

    No 2: CowboyNeal

    Node id: 850; user since: Oct 27 1999; last here: Jun 28 2000; Experience: 8; 0 posts.

    The nickname suggests that this is Jonathan "CowboyNeal" Pater of Slashdot fame.

    No 1: paco

    Node id: 846; user since: Oct 27 1999; last here: Oct 27 1999; Experience: 754; 1 post.

    We have saved the best for last!

    754 experience points from just one post! Amazing! The highest rated node of all time! (Update: oops, only 2nd highest after camel code, thanks tye). After setting this place alight as its first user, paco mysteriously vanished. Like many others, I patiently await his return. During my research, there were indications from some that paco may not be a real user, but don't listen to these heretics. I am a true believer. paco will return!


    Everything2 References

    Updated Nov 17 2014: Minor corrections and wording changes; 22 Nov 2014: Added Everything2 References plus minor update to "Blockstackers Inc" section (no pants required) and mystery sgtbaker user.
