Here is an explanation of my feedback:
- Why reverse the order of the arguments? join and
split both start with the separator, and then the
input. So I changed the order back to strings, input.
Because if you have positionally determined arguments
with one list being variable length, it is usual to put
the variable-length list at the end of your argument
list. In this case, when I made it recursive (and
therefore capable of handling n-dimensional arrays), you
had a variable-length list of things to join and split.
So I moved those arguments to the end.
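To make the convention concrete, here is a rough sketch
of the kind of signature I mean; the name and layout are
illustrative only, not the module's actual code:

```perl
use strict;
use warnings;

# Fixed argument first, variable-length separator list last.
# (Illustrative sketch only, not the real superjoin code.)
sub superjoin {
    my ($data, @separators) = @_;
    return $data unless ref $data;   # leaf: a plain string
    my $sep = shift @separators;     # first separator joins the outer level
    return join $sep, map { superjoin($_, @separators) } @$data;
}

# 2-D example: rows joined with ",", the rows themselves with "\n"
print superjoin([ [qw(a b)], [qw(c d)] ], "\n", ","), "\n";
# prints "a,b" then "c,d" on the next line
```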
- I don't like to pre-compile the regexes; otherwise
the split couldn't cope with changing delimiters, as in
a text file (see the SYNOPSIS), or with sprintf'ed data.
So I changed that back. Furthermore, I couldn't find any
reference to qr// in the manpages. Could you please explain?
Pre-compiling the regexes should not pose a problem for
having patterns that handle multiple delimiters. Could
you try it and report back? As for the documentation,
the docs on this server are 5.003-specific. (Same as
Camel 2.) Most people are on 5.005 or 5.6. On those
machines you can find out about the feature from the
local documentation using the perldoc utility. In fact,

    perldoc -f qr

directs you to perlop/"Regexp Quote-Like
Operators". So try

    perldoc perlop

and then type /Quote-Like to get to the relevant
section. Then /qr and hit 'n' until you get to
the right spot. (The same search/paging tricks work with
utilities like man and less on *nix systems.)
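For what it's worth, qr// compiles a pattern at run
time, so a changing delimiter only means recompiling
when the delimiter actually changes. A rough sketch of
what I have in mind:

```perl
use strict;
use warnings;

# A qr// pattern is built at run time, so it can track a delimiter
# that changes from file to file. (Sketch, not the module's code.)
my $delim = ",";
my $re    = qr/\Q$delim\E/;        # \Q...\E quotes regex metacharacters
print join("|", split $re, "a,b,c"), "\n";   # prints a|b|c

$delim = ";";                      # delimiter changed at run time
$re    = qr/\Q$delim\E/;           # recompile once
print join("|", split $re, "x;y;z"), "\n";   # prints x|y|z
```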
- ...multi-dimensional arrays are a perl 5 feature,
so I should check for that anyway. Out of time now, so
that goes into the next version of supersplit.
I already did that with the recursion. :-)
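For reference, the recursive idea can be sketched like
this (illustrative code only, with the first separator
splitting the outermost dimension):

```perl
use strict;
use warnings;

# Recursive split: one separator per dimension, outermost first.
# (Sketch only; the real supersplit differs in details.)
sub supersplit {
    my ($text, @separators) = @_;
    return $text unless @separators;   # no separators left: a leaf string
    my $sep = shift @separators;
    return [ map { supersplit($_, @separators) }
             split /\Q$sep\E/, $text ];
}

my $table = supersplit("a,b\nc,d", "\n", ",");
# $table is now [ ['a','b'], ['c','d'] ]
```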
- I really like the recursive approach.
So did I. :-)
- I don't see the need for a separate IO version, so
I changed that back, too. I just try to treat the string
as a filehandle, or try to open it as a file (new
feature). I didn't succeed in getting supersplit( INPUT ),
with INPUT as a filehandle, to work. That's peculiar,
because the manpage tells me that <$fh>, with
$fh='INPUT', should read from that filehandle.
The need is due to your having overloaded the input too
much. For instance, if someone tried to use your current
version of supersplit() on an uploaded file from
CGI, they would fail miserably. I also really
don't like trying an open and silently failing.
Additionally it is generally a bad idea to limit how your
caller can pass information. What if I really want to
pass you data from a socket? Or from IO::Scalar? Or
from a string I have already pre-processed? Having two
functions, one of which is a wrapper around the other,
for that situation leaves you with a consistent interface
and more flexibility.
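As a sketch of what I mean (the names here are made up,
not the module's API): the core routine takes a
filehandle, and a thin wrapper covers the plain-string
case, so sockets, IO::Scalar handles, and pre-processed
strings all go through the same interface.

```perl
use strict;
use warnings;

# Core routine: works on any filehandle the caller hands us.
sub read_and_split {
    my ($fh, $sep) = @_;
    local $/;                        # slurp the whole handle
    my $text = <$fh>;
    return split /\Q$sep\E/, $text;
}

# Thin wrapper for the common string case. (In-memory filehandles
# via a scalar reference need perl 5.8+; on older perls IO::Scalar
# from CPAN does the same job.)
sub split_string {
    my ($string, $sep) = @_;
    open my $fh, '<', \$string or die "can't open in-memory handle: $!";
    return read_and_split($fh, $sep);
}

my @fields = split_string("a:b:c", ":");   # ('a', 'b', 'c')
```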
As for your comment on what you are surprised is failing,
I would not expect that to work. Which manpage led you
to expect that it would?
- You are totally right on the matter of the
inner/outer naming convention.
Get bitten often enough and you become sensitive to
potential confusions in names. :-)
The real issue here is the same one which makes it hard
for programmers to find their own bugs. You need to
step outside your own preconceptions of how you are
supposed to be working and thinking, and see the problem
as another person is likely to see it. This is
frequently much easier for another person to do...
- And ++ for the join( $_, @_ ) stuff. I never would
have dared to use it. But of course $_ and @_ have
nothing to do with each other.
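The point being that the two variables are entirely
independent; a tiny sketch:

```perl
use strict;
use warnings;

# $_ is the current topic (set here by for), @_ is the sub's
# argument list; the two never collide.
sub demo {
    my @words = @_;                 # grab the arguments
    for (",", ";") {                # for() aliases $_ to each separator
        print join($_, @words), "\n";
    }
}
demo(qw(a b c));   # prints "a,b,c" then "a;b;c"
```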
- I removed the BEGIN blocks. Is this something for
the manpages (perldoc perlmod)?
Well it is something that I know because I looked in some
detail at Carp and Exporter a while ago. While the
principles of what happens when are documented, I don't
think that the conclusion is stated anywhere. I certainly
had to learn it by reading and thinking through the code.