For me, one of the essential features of command-line facility is the pipe. Notice what sorts of operations you often need on the command line (sorting, grepping, extracting one or more columns from a list of lines, doing string substitutions, etc.); usually you'll find a unix utility that does just one of these things quite well, with a lot of flexibility in how you can do that one thing.
in reply to Resource for command line "one-liner" interpretation
If you have a particular need that is not easy to meet with an existing tool, write a simple perl tool that makes the operation easy on the command line, and put it in your PATH. A typical situation: I want to locate all files whose names contain "x" and whose size is greater than 10240 bytes, and determine the total space consumed by those files. This could certainly be done entirely (and fairly easily) in perl, but combining standard command line tools with perl makes it even easier (a lot less typing) and offers a lot of flexibility, especially if your shell, like bash, lets you recall, edit and re-execute earlier commands:
find . -name '*x*' -type f -printf '%s\n' | perl -ne 'chomp; $s += $_ if $_ > 10240; END { print "$s\n" }'
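For comparison, here is one way the all-perl version might look, using the core File::Find module (the function name sum_matching is my own, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Sum the sizes of regular files under $dir whose basenames match
# $name_re and whose size exceeds $min bytes -- the same job as the
# find | perl pipeline above, done entirely in perl.
sub sum_matching {
    my ($dir, $name_re, $min) = @_;
    my $total = 0;
    find(sub {
        return unless -f $_;         # regular files only (like -type f)
        return unless $_ =~ $name_re; # name filter (like -name '*x*')
        my $size = -s _;              # reuse the stat from the -f test
        $total += $size if $size > $min;
    }, $dir);
    return $total;
}

print sum_matching('.', qr/x/, 10240), "\n";
```

It works, but the pipeline version is noticeably less typing, which is the point being made here.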
I found myself needing to sum columns like this quite often, in many different situations, so I wrote a perl script to do just this (with options for flexibility), reducing the above example to:
find . -name '*x*' -type f -printf '%s\n' | sumcol -min 10240

In general, if I'm writing "perl -e '...'" or "perl -ne '...'" on a command line, I don't make system calls from within that perl snippet -- it's quicker (less typing) to run the unix tools directly on the command line, and to pipe their output into perl when something more is needed.
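The sumcol script itself isn't shown in the post; a minimal sketch of such a filter might look like the following (the -min option and its inclusive-threshold behavior are inferred from the usage above; the real script has more options):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# Sum numeric input lines that are at or above $min; non-numeric
# lines are silently skipped so the filter tolerates stray output.
sub sum_lines {
    my ($min, @lines) = @_;
    my $sum = 0;
    for my $line (@lines) {
        chomp $line;
        next unless $line =~ /^-?\d+(?:\.\d+)?$/;
        $sum += $line if $line >= $min;
    }
    return $sum;
}

my $min = 0;
GetOptions('min=f' => \$min) or die "usage: sumcol [-min N]\n";

# Only read STDIN when it's actually a pipe or redirect.
print sum_lines($min, <STDIN>), "\n" if !-t STDIN;
```

Once a filter like this is in your PATH, it composes with any pipeline that emits one number per line, not just find.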