in reply to Perl vs. Python: Looking at the Code
Hey mothra, I guess I know why you haven't been to the local Perl
monger meetings in a while ... :-)
Picking out individual bits of syntax as a basis for comparison is
simply fraught with problems. All the more so when your comparison
boils down to counting characters typed in each fragment. If you like
Python better than Perl, that's fine, I think Python is a fine
language too. If you think it is easier to learn, easier to maintain,
and/or simply easier to think in, that's just fine too. But I cannot
fathom choosing to base so much of your debate on a few tiny
instances of being able to type a few fewer characters in one or the
other.
However, if you are really interested in making that a comparison
issue, please do so with something other than isolated fragments. Anyone
can play that game --- no one wins, and what's more, no one learns.
So, though I haven't touched Python in quite some time, and I never
had more than what I would call a passing acquaintance with it (I
implemented a few things to see how I liked it), I decided to write a
simple but non-trivial program in each for comparison. I chose to
implement a wc-like program: read either from STDIN or from files
passed on the command line, and print out the line count, word count,
and byte count of the input --- if multiple files are given on the
command line, print each one separately, plus a total at the end. The
output should be similar to the output of wc. Each program should
process the input one line at a time (no slurping whole files into
memory; we don't know how big they'll be or how much memory the user
has). Below is a baseline run of wc on my Python version, my Perl
version, and a roughly 10 MB text file (I concatenated the jargon file
a handful of times), followed by timing runs of the Python and Perl
versions on the same input (stdin tests not timed):
# wc baseline output for comparison:
~$ wc perl_wc.pl pyth_wc.py large.txt
19 58 492 perl_wc.pl
25 96 673 pyth_wc.py
208048 1675832 11021496 large.txt
208092 1675986 11022661 total
~$ cat perl_wc.pl pyth_wc.py large.txt | wc
208092 1675986 11022661
# python version 2.0.1
~$ time ./pyth_wc.py perl_wc.pl pyth_wc.py large.txt
19 58 492 perl_wc.pl
25 96 673 pyth_wc.py
208048 1675832 11021496 large.txt
208092 1675986 11022661 total
real 0m31.360s
user 0m31.030s
sys 0m0.090s
~$ cat perl_wc.pl pyth_wc.py |./pyth_wc.py
44 154 1165 <stdin>
# perl version 5.6.1
~$ time ./perl_wc.pl perl_wc.pl pyth_wc.py large.txt
19 58 492 perl_wc.pl
25 96 673 pyth_wc.py
208048 1675832 11021496 large.txt
208092 1675986 11022661 total
real 0m7.450s
user 0m7.240s
sys 0m0.090s
~$ cat perl_wc.pl pyth_wc.py |./perl_wc.pl
44 154 1165 -
Before I post the code for both, I will state that I simply thought of
a rough and ready algorithm first, then coded each one --- not trying
to use tricks or shortcuts (though I did remove blank lines from each when
finished). Well, that's not entirely true --- I also inlined the
variable initializations in the python version, and I normally
wouldn't do that in python code ... it just kind of freaks me out
without parentheses :-) Also, to be fair, here's the byte count for
each one with all whitespace stripped out entirely (I mean, extra
indentation in the python version doesn't really equate to extra
typing, auto-indent handles much of that): python stripped: 437; perl
stripped: 362, difference = 75 characters. And I'll certainly grant
that there are likely common idioms in python I am unaware of that
would shrink that difference further --- the little map-lambda thing
was just what sprang to mind for dealing with either STDIN or command
line args, perhaps there's something more obviously magical like Perl's
<> operator.
Frankly, I'm unconcerned about the difference in typing. Both
versions were easy to code, and seem to me to be easy to read. But
the difference in speed of basic I/O and text handling does seem
significant to me (I know, if I were *really* concerned about speed
I'd use C ... but I'm also concerned about ease of programming and
development time, so if Perl and Python are on relatively equal
footing there, the 4-fold speed difference is definitely a factor
... well, that and CPAN of course). Of course, this is but a tiny
fragment of the functionality in both languages as well ... so, make
of it what you will.
With that in mind, please feel free to enlighten me on using better
and/or more efficient python constructs. I'm always interested in
learning something new.
#!/usr/bin/python
import sys
files = map(
    lambda f:
        open(f),
    sys.argv[1:]) or [sys.stdin]
Twords, Tlines, Tchars = 0, 0, 0
for file in files:
    words, lines, chars = 0, 0, 0
    while 1:
        line = file.readline()
        if line:
            lines = lines + 1
            list = line.split()
            words = words + len(list)
            chars = chars + len(line)
        else:
            print "%7d %7d %7d %s" % (lines, words, chars, file.name)
            break
    file.close()
    Twords = Twords + words
    Tlines = Tlines + lines
    Tchars = Tchars + chars
if len(sys.argv) > 2:
    print "%7d %7d %7d total" % (Tlines, Twords, Tchars)
#!/usr/bin/perl -w
use strict;
my $total = @ARGV > 1;
my($Tlines, $Twords, $Tbytes, $lines, $words, $bytes);
while (<>) {
    my @words = split;
    $words += @words;
    $bytes  += length;
    $lines++;
    if (eof) {
        printf "%7d %7d %7d %s\n", $lines, $words, $bytes, $ARGV;
        $Tlines += $lines;
        $Twords += $words;
        $Tbytes += $bytes;
        ($lines, $bytes, $words) = (0, 0, 0);
        close ARGV;
    }
}
printf "%7d %7d %7d total\n", $Tlines, $Twords, $Tbytes if $total;
Re: Re: Perl vs. Python: Looking at the Code
by mothra (Hermit) on Apr 04, 2002 at 14:25 UTC
Hey mothra, I guess I know why you haven't been to the local Perl monger meetings in a while ... :-)
Well...not quite. :) I've been busy, planning on moving to Europe, trying to sell my car, etc. In January, I was in London and Amsterdam, and got together with a couple of the London.pm'ers.
My motivations have much more to do with finding the tool that lets me be as lazy as possible.
Now, quickly, on to the code (I have to actually do some work right away, heh).
First off, I was hoping to say that Python's fileinput module (its input() function specifically) was equivalent to Perl's <>; however, it isn't. I've sent a message to comp.lang.python to try to understand why they work differently, because I understood them to be the same until I tried to map it onto the wc program.
Anyways, to the code.
I ran your programs on my machine (Pentium 733, 256 MB RAM, cygwin, NT4WS, Python 2.2, Perl 5.6.1); large.txt was an 11 MB file.
$ time ./wc.py wc.pl wc.py large.txt
21 58 494 wc.pl
25 96 698 wc.py
382230 1290003 11930691 large.txt
382276 1290157 11931883 total
real 0m7.922s
user 0m7.218s
sys 0m0.124s
$ time ./wc.pl wc.pl wc.py large.txt
21 58 494 wc.pl
25 96 698 wc.py
382230 1290003 11930691 large.txt
382276 1290157 11931883 total
real 0m4.484s
user 0m4.186s
sys 0m0.093s
Then, I made some changes to the Python:
#!/usr/bin/python
import sys
files = map(lambda f: open(f), sys.argv[1:]) or [sys.stdin]
Twords = Tlines = Tchars = 0
for file in files:
    words = lines = chars = 0
    for line in file.xreadlines():
        lines += 1
        words += len(line.split())
        chars += len(line)
    print "%7d %7d %7d %s" % (lines, words, chars, file.name)
    Twords += words
    Tlines += lines
    Tchars += chars
if len(sys.argv) > 2:
    print "%7d %7d %7d total" % (Tlines, Twords, Tchars)
With the following results:
$ time ./wc.py wc.pl wc.py large.txt
21 58 494 wc.pl
17 74 518 wc.py
382230 1290003 11930691 large.txt
382268 1290135 11931703 total
real 0m6.157s
user 0m6.046s
sys 0m0.124s
It seems you were using a fairly old version of Python. Version 2.1 sped up line-by-line file access.
So, what's the point? I'm not sure, but you said you were interested in seeing a better (though I'd definitely not dare claim "best") version of the Python code, so there's my contribution. :) Also, it's worth noting that the speed differences in this example are negligible.
Update I: words = lines = chars = 0 might be slightly more idiomatic. I also would have written the map code (in the Python version) all on one line. That's a style difference, I guess. :)
Update II: Okay, I put the changes in the Python code mentioned in Update I.
Update III: And, for those who claim Python "forces" you into its own coding style, note that I could have written the map code using a list comprehension instead:
files = [open(f) for f in sys.argv[1:]] or [sys.stdin]
Python gives you more than one way to do it. IMHO it "takes away your options" in places where too many options are a Bad Thing anyway (e.g. one way to define function parameters instead of using shift or @_ in Perl, and total elimination of any concerns about differences in {} style, because the braces are gone, etc.).
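To make the parameter-passing point concrete, here's the sort of variety I mean on the Perl side (hypothetical subs, purely for illustration); Python's def signature covers all of these with one mechanism:
#!/usr/bin/perl -w
use strict;

# Three common ways to unpack the same two arguments in Perl.
sub area_shift {
    my $width  = shift;           # peel arguments off @_ one at a time
    my $height = shift;
    return $width * $height;
}

sub area_list {
    my ($width, $height) = @_;    # unpack @_ in one assignment
    return $width * $height;
}

sub area_named {
    my %arg = @_;                 # fake named parameters with a hash
    return $arg{width} * $arg{height};
}

print area_shift(3, 4), "\n";                       # 12
print area_list(3, 4), "\n";                        # 12
print area_named(width => 3, height => 4), "\n";    # 12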
Thanks for the code followup --- I do like the list method you showed
rather than my lambda hack (like I said, it's been some time since I
actually played with Python ... something around 1.5.x, which didn't
even have += back then, IIRC). You are right, Python has certainly
improved in speed: from 31 to 13 secs just by switching from 2.0.1 to
2.2.1c2, and then to 11 secs using xreadlines(). Cheating and reading
whole files into memory and then working with them brought it down
to 8 secs --- but the same cheat in the Perl version took it from 7
to 3 secs. The better relative improvement from the Perl cheat is
because we can get a "word" count via s/\s+//g without building a
list (after, of course, we get the newline and byte counts); I
couldn't find a way to do that in Python without building a list,
so len(string.split()) was the best I could do in Python. Also, I
did get a Python version working with fileinput, but it was vastly
slower and has awkward semantics for dealing with individual files
while you iterate through them (i.e., rather than an 'eof' test to
see if you are at the end of the current file, you get an
'isfirstline()' test to see if you just read the first line of a new
file ... this makes for awkward logic, in my opinion).
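In case anyone is curious, the Perl "cheat" I'm describing amounts to something like the sketch below (not the exact code I timed, just the idea; note that the whitespace-run trick only agrees with wc's word count when the text starts with non-whitespace and ends in a newline):
#!/usr/bin/perl -w
use strict;

# Rough sketch of the slurp cheat: read each file whole, then derive the
# three counts without ever building a list of words.
for my $name (@ARGV) {
    open my $fh, '<', $name or die "$name: $!";
    my $text = do { local $/; <$fh> };      # slurp the entire file
    close $fh;
    my $bytes = length $text;               # byte count first ...
    my $lines = ($text =~ tr/\n//);         # ... then newlines (tr in "count" mode) ...
    my $words = ($text =~ s/\s+//g) || 0;   # ... then count whitespace runs as a word proxy
    printf "%7d %7d %7d %s\n", $lines, $words, $bytes, $name;
}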
All said and done, and although I'm not interested in relatively
small differences in the number of characters, one of your
strong concluding statements in your original post was:
The points I've shown above are concrete examples of why, even with
best coding practices, character for character, and due to language
design issues you will save characters in Python
And I think, once you look at the context of actually writing
programs, rather than syntax fragments, your statement won't really
hold up. Still, perhaps now others will see that Python isn't
necessarily as verbose as it is often made out to be. Your updated
version looks quite nice :-)
As for your comment that the circa 3:2 speed differential is
negligible, I suggest that perhaps depends more on application domain
and the kind of work you usually do. Further, the speed difference
can be much more significant --- using regexen appears to be much
slower in Python. For example, take a simple grep script (it takes a
pattern, reads STDIN, and prints lines that match), run with a pattern
of "a.*e.*i.*o.*u" against my /usr/dict/words file (i.e., find all words
containing that ordered, but not necessarily contiguous, sequence of
vowels). The
Python version took 7.5 secs, the Perl version took 1 sec, and the C
grep on my box took 0.2 secs --- incidentally, my words file is
non-standard and contains 263,533 entries, of which 47 match the
pattern given. For myself, this renders your 'if the languages were
equal on every other count' qualifier somewhat moot.
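For reference, the grep script mentioned above boils down to something like this on the Perl side (a minimal sketch, not the exact code I timed):
#!/usr/bin/perl -w
use strict;

# Minimal grep: first argument is the pattern, lines come in on STDIN,
# matching lines go to STDOUT.
my $pattern = shift or die "usage: $0 pattern < input\n";
while (my $line = <STDIN>) {
    print $line if $line =~ /$pattern/o;    # /o: compile the pattern only once
}
Invoked as ./grep.pl 'a.*e.*i.*o.*u' < /usr/dict/words; the Python counterpart would be the same loop over sys.stdin using the re module.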
As for module documentation --- docstrings are nice for what they do,
but rather limited. In fact, the primary documentation for Python and
its libraries is a set of LaTeX files. Perl's POD isn't as flexible
or as powerful as LaTeX, but it is simple and it is embeddable, which
are pretty good properties, and it provides a standard documentation
model (and utilities) for all of Perl and its modules.
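By "embeddable" I mean the documentation lives in the same file as the code it describes, and the standard pod tools (perldoc, pod2html, and friends) can pull it out; a trivial, made-up example:
package Acme::Example;   # hypothetical module, purely for illustration
use strict;

=head1 NAME

Acme::Example - a throwaway module showing POD sitting next to its code

=head1 SYNOPSIS

    use Acme::Example;
    print Acme::Example::greet("world"), "\n";

=cut

sub greet {
    my $name = shift;
    return "Hello, $name!";
}

1;
Running perldoc on that file formats and displays just the POD sections; the code in between is ignored by the documentation tools, and the POD is ignored by perl itself.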
Some of the other points you raised are valid: a standard language
reference capable of supporting multiple implementations can be a
good thing versus just a reference implementation; fewer rules and
fewer styles can certainly help beginners (though they can also be
constraining to experienced programmers); Python's intrinsic OO
model is simpler and cleaner; argument passing in Python is nicer.
However, Perl 6 looks poised to address most of these, though I don't
expect to see any kind of release before summer 2003. Something you
didn't mention is that Python ships with a pretty sizeable standard
library --- although CPAN remains unmatched in any language.
Anyway, perhaps I'll see you at the next PM meeting and we can
follow this up over a beer or three :-)
>>>
Python gives you more than one way to do it. IMHO it "takes away your options" in places where too many options are a Bad Thing anyway. (e.g. one way to define func parameters instead of using shift or @_ in Perl), totally eliminating any concerns about differences in {} style, because they're gone, etc.)
>>>
See, you are missing it again: the GOOD-THING as invented by the Python and/or Perl designers may not be the good thing for you... That is the main difference: in Perl you take the responsibility for not screwing things up, in Python you do too, but not so much... and the drawback of the Python way is that you lose the freedom of expressing yourself better...
Again, maybe this is what you want, and that is OK. But Perl-ers don't like this; they like freedom.
That's the same BS argument (paraphrasing chromatic here, IIRC)
people always throw out about Java.
It doesn't matter how hard language designers try to stop people from writing bad code, people still continue to amaze them with swill.