Google Spreadsheet Distributed Agent System on Sep 30, 2009 at 11:59 UTC | by dmlond |
This object sets up a system to allow a user to configure a set of scripts to use a single Google Spreadsheet as a control panel when running across many servers. See The RFC for more information. |
File Backup Utility on Jul 26, 2009 at 03:04 UTC | by misterMatt |
I wrote this utility to assist me in backing up files from my hard drive to an external drive. This is my first perl script!
I tested this on the latest ActivePerl installation on Windows Vista. This script requires File::Copy::Recursive.
If anyone has suggestions on how to improve this script, either through code improvements or feature additions, post your ideas here! I would appreciate it. Sorry about the formatting. |
Simple r-copy style backup on Jul 07, 2009 at 16:30 UTC | by gman |
Simple script I used to back up my laptop before a lease exchange. I wasn't looking to reinvent the wheel; I saw rcopy after I wrote this, and rcopy requires rsync. I was running XP with ActiveState Perl, so it seemed just as easy to write something quick that would do the job.
Update: thanks to graff for pointing out the error I had with mkpath:
# mkpath() comes from File::Path; it dies on failure, so trap it in an eval
eval { mkpath("$copy_to/$dir") };
die "Could not create path $copy_to/$dir: $@" if $@;
|
mkdir_heute on May 25, 2009 at 21:14 UTC | by mamawe |
I use this script every day to create directories, keep them organized, change between them, and not think too much about it.
I use it in a shell alias like this:
alias cdheute='cd `mkdir_heute`'
It actually used to be a module plus a script, but for simplicity in posting it here I have slurped the module into the script.
|
xcol on May 17, 2009 at 19:00 UTC | by sflitman |
A simple text-based column extractor for use in Unix pipelines |
Data Sampler (Extract sample from large text file) on Mar 06, 2009 at 21:12 UTC | by roboticus |
I frequently find the need to test programs on *real* data. But some of the datasets I have to deal with are rather ... large.
So this program lets me generate much smaller datasets to develop with.
Update: 20090306: I edited the first print statement to remove the trace information. |
Fix CPAN uploads for world writable files on Dec 21, 2008 at 22:33 UTC | by bart |
CPAN refuses to index tarballs containing world-writable files, a problem most commonly encountered by people creating CPAN distributions on Windows.
This script will fix the file modes of the files directly in the tarball, as sketched below. Run it right after you create the tarball, but before you upload it to PAUSE.
Run with -h or --help to see the allowable command line options. You need at least:
- A filename
- The -i option to replace the file in place, or the -o option to save the result as a new file
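For illustration, here is a minimal, hand-rolled sketch of the idea (not bart's script): it strips the group/world write bits from every member of a gzipped tarball, assuming a reasonably recent Archive::Tar that provides chmod(), and assuming input/output names on the command line.
use strict;
use warnings;
use Archive::Tar;

my ($in, $out) = @ARGV;                    # assumed: input tarball, output tarball
my $tar = Archive::Tar->new($in, 1) or die Archive::Tar->error;

for my $member ($tar->get_files) {
    my $mode = $member->mode;
    next unless $mode & 022;                                  # already clean
    $tar->chmod($member->name, sprintf '%o', $mode & ~022);   # drop group/world write bits
}

$tar->write($out, COMPRESS_GZIP) or die Archive::Tar->error;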
|
svn tk diff on Oct 25, 2008 at 10:36 UTC | by casiano |
I use "svn" command line for most of my work, but I
very much like graphic diff over "svn diff". This program
uses "svn export" to get a temporary copy of the file
and then "tkdiff" or whatever program you specify to
present the differences between your working
copy and the one in the repository |
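A minimal, hand-rolled sketch of that export-then-diff idea (assuming svn and tkdiff are on your PATH; the posted script is more configurable):
use strict;
use warnings;
use File::Temp qw(tempdir);
use File::Basename qw(basename);

my $file = shift or die "usage: $0 file\n";
my $dir  = tempdir(CLEANUP => 1);
my $base = basename($file);

# export the pristine BASE revision of the working-copy file to a temp dir ...
system('svn', 'export', '-q', '-r', 'BASE', $file, "$dir/$base") == 0
    or die "svn export failed\n";

# ... and hand both versions to the graphical diff
system('tkdiff', "$dir/$base", $file);
|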
cleanzip on Oct 20, 2008 at 05:33 UTC | by sflitman |
This is a quickie script to clean infected zip files using clamscan. It's definitely meant for the Unix/Linux platform and expects clamscan 0.94, which, with the indicated switches, will print out each infected file on a line by itself along with whatever virus it found. Note that this script will also delete zipped emails or mailboxes which clamscan identifies as containing phishing, etc., since it does not distinguish which type of unwanted byte sequence clamscan reports. This works around a deficiency in ClamAV, noted by many, that it does not identify the actual bad actor(s) in an archive, only that the archive as a whole is infected (and not whether it is multiply infected, which is of course possible). |
column formatter on Oct 18, 2008 at 17:36 UTC | by sflitman |
Prints two or more files in nicely spaced columns. |
dump binaries on Oct 18, 2008 at 17:29 UTC | by sflitman |
Rolled my own version of od using my favorite tool, Perl! dump is aware of the terminal width, using Term::Size if available. |
splicepath on Oct 13, 2008 at 22:00 UTC | by casiano |
Like Perl's splice operator, but works on PATH-like environment variables, i.e. lists whose elements are separated by colons.
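A minimal, hand-rolled sketch of the idea (not casiano's splicepath), assuming the variable name, an offset, a length and optional replacement elements are given on the command line:
use strict;
use warnings;

my ($var, $offset, $length, @replacement) = @ARGV;
die "usage: $0 VAR OFFSET LENGTH [ELEMENT ...]\n" unless defined $length;

my @parts = split /:/, (defined $ENV{$var} ? $ENV{$var} : ''), -1;   # -1 keeps trailing empty fields
splice @parts, $offset, $length, @replacement;
print join(':', @parts), "\n";
A shell assignment such as PATH=`perl splicepath-sketch.pl PATH 0 0 /opt/bin` (hypothetical script name) would then prepend /opt/bin.
|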
Remove an Installed Module on Oct 13, 2008 at 21:25 UTC | by casiano |
Removes an installed Perl distribution. |
Using Linux::Inotify2 with POE on Oct 07, 2008 at 15:02 UTC | by jfroebe |
The examples in Linux::Inotify2 are for Event, Glib::IO and a manual loop. With the help of rcaputo, tye and Animator, I was able to get Linux::Inotify2 to work with POE. :) |
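A minimal sketch of the kind of wiring the entry above describes, assuming a watched path of /tmp/watch (the full working version is in the node itself):
use strict;
use warnings;
use POE;
use Linux::Inotify2;

POE::Session->create(
    inline_states => {
        _start => sub {
            my ($kernel, $heap) = @_[KERNEL, HEAP];
            my $inotify = Linux::Inotify2->new or die "inotify: $!";
            $inotify->watch('/tmp/watch', IN_CREATE | IN_DELETE,
                sub { print $_[0]->fullname, "\n" });
            # hand inotify's file descriptor to POE so poll() runs only when readable
            open my $fh, '<&=', $inotify->fileno or die "fdopen: $!";
            $kernel->select_read($fh, 'inotify_readable');
            $heap->{inotify} = $inotify;
        },
        inotify_readable => sub { $_[HEAP]{inotify}->poll },
    },
);
POE::Kernel->run;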
cvs wrapper with ssh-agent on Sep 19, 2008 at 15:02 UTC | by jacques |
I wrote this cvs wrapper because we were using cvs over ssh and I didn't want to keep logging in each time I invoked cvs. If you are doing the same thing, you might find it useful. One thing to note is that if someone has root, it is possible for them to get at your credentials, since ssh-agent keeps the decrypted key in memory.
|
Trash temporary files on Aug 22, 2008 at 00:27 UTC | by bruno |
BACKGROUND
I am a little obsessive about the tidiness of my home folder. I cannot stand seeing loose, uncategorized files scattered everywhere (let alone on the Desktop -- the horror!). But more often than not, I run across files that do not really fit into any particular category, and I also do not know whether I'll want to store them permanently or not.
So I have a ~/tmp folder into which I toss "the garbage", and every month or so I take a look at it and see what gets deleted and what gets "saved" and categorized.
THE SOLUTION
So I wrote this little script (I think of it as one of those robot-housewives from The Jetsons) that looks into my ~/tmp folder every hour and sends files that are N days or older (measured in days since arrival in the ~/tmp folder) to the trash. This gives you a certain time window in which you can re-evaluate the usefulness of a file and rescue it from oblivion, or not.
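A minimal sketch of the idea, assuming ~/tmp as the dumping ground, a hypothetical ~/.Trash directory as the destination, and file modification time as an approximation of "arrival" time (bruno's script is the real thing):
use strict;
use warnings;
use File::Copy qw(move);
use File::Path qw(mkpath);

my $days  = 30;                        # assumed retention window (N)
my $tmp   = "$ENV{HOME}/tmp";
my $trash = "$ENV{HOME}/.Trash";       # hypothetical trash location
mkpath($trash) unless -d $trash;

opendir my $dh, $tmp or die "opendir $tmp: $!";
for my $entry (readdir $dh) {
    next if $entry eq '.' or $entry eq '..';
    my $path = "$tmp/$entry";
    next unless -M $path >= $days;     # -M: age in days since last modification
    move($path, "$trash/$entry") or warn "move $path: $!";
}
closedir $dh;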
|
Analyzing Apache memory usage on Jul 18, 2008 at 15:09 UTC | by Azhrarn |
After a day of trying to figure out why one of my web servers was locking up, I found that it was using a bit too much memory. But I had no idea how much, and Linux memory reporting is a bit arcane at best, especially with something like Apache + mod_perl/PHP using shared memory pools. So after some analysis, I came up with the included script.
|
distr - show distribution of column values on May 23, 2008 at 13:45 UTC | by Corion |
This program returns a quick tally of the different values for a column. My primary use for it is to find the most common date value in a file, so that I can rename the file to that date.
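A minimal sketch of such a tally (assuming whitespace-separated columns and a 1-based column number as the only argument; the real distr does more):
use strict;
use warnings;

my $col = shift || 1;
my %count;
while (<>) {
    my @fields = split ' ';
    my $value  = defined $fields[ $col - 1 ] ? $fields[ $col - 1 ] : '';
    $count{$value}++;
}
printf "%8d  %s\n", $count{$_}, $_
    for sort { $count{$b} <=> $count{$a} } keys %count;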
It is also very convenient to use this program to get a quick overview of the distribution of lengths, especially for numbers.
Currently, I'm "confident" that I'm picking the right value as the maximum if that value occurs in at least 60% of the rows of the sample I'm taking. This has proven sufficient, but better would be an estimator that determines the size of the sample, or expands the sample as long as there is not enough confidence in the mode. |
Regenerating grub.conf on Feb 28, 2008 at 21:00 UTC | by Tanktalus |
Automatically modifying grub.conf when inserting a new kernel or removing an old one on linux can be difficult. Even just modifying it in vi can be annoying. So, after getting some advice from other Linux users, I settled on the idea of regenerating it.
When I add a new kernel, I always rename it to /boot/kernel-$version-gentoo$release (where $release =~ /-r\d+/ or it's blank) to make it easy to view. So, using that personal convention, combined with a bit of Template Toolkit, I get this script. I think all my personal conventions are at the top of the script, or in the template at the bottom.
Hope this helps other linux users out there that may be building their own kernels (and thus modifying grub.conf themselves). |
pnt2bmp on Feb 10, 2008 at 17:14 UTC | by shotgunefx |
Found some of my old drawings I did on my Tandy 1000 using Tandy Deskmate circa 1989; nothing existed to view or convert them. I was able to figure out the format (for my version of Deskmate, anyway), and this will generate a BMP from the PNT file. |
art2bmp on Feb 08, 2008 at 09:38 UTC | by shotgunefx |
Converts obscure ".art" files that were generated from "VGA Art Studio" into bmps. If you have a ".art" file, it is almost certainly a different format.
A quick hack, but it works. |
histogram on Jan 20, 2008 at 15:26 UTC | by polettix |
histogram [--usage] [--help] [--man] [--version]
histogram [--include-zero|--zero|-z] [--min-data-points|-m <num>]
[--noise|-N <threshold>] [--numeric|-n]
[--percentual|-p] [--step|-s <step>] [--tail|-t <length>]
# Generating histogram's data
shell$ grep 'interesting' file.txt | gawk '{print $3}' | histogram
# If you have numbers you can keep them in order and divide them into "classes"
shell$ histogram --step 10 --numeric data-column.txt
This utility produces histograms out of input data. Every input line (minus the trailing newline) is regarded as an item, and a count is kept for each distinct item. After the counting phase is over, the label-count pairs are printed, ordered by count, descending. This is the basic mode of operation.
If you happen to know that your inputs are numbers, and you care about keeping them in order, you can specify --numeric|-n. This makes sure that you get something resembling a distribution, and also that all the gaps in between are filled (at integer intervals). If you also want 0 to be included, however far away it may be, just pass option --include-zero|--zero|-z.
Moreover, if your data are numeric and you'd rather group them into steps (e.g. 0-9, 10-19, etc.), you can pass option --step|-s. Steps all start from 0 and need not be integers.
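A minimal sketch of just the --step grouping (assuming one number per line and a step width as the first argument; the real histogram has many more options):
use strict;
use warnings;
use POSIX qw(floor);

my $step = shift || 10;
my %bucket;
while (my $line = <>) {
    chomp $line;
    next unless $line =~ /^-?\d+(?:\.\d+)?$/;   # keep numeric lines only
    my $lo = floor($line / $step) * $step;      # bucket lower bound, anchored at 0
    $bucket{$lo}++;
}
printf "%g-%g\t%d\n", $_, $_ + $step, $bucket{$_}   # upper bound shown is exclusive
    for sort { $a <=> $b } keys %bucket;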
|
Log Rotation on Nov 23, 2007 at 08:15 UTC | by Danikar |
A small script I wrote for work. My first script that is going to be used for anything real, so I am pretty nervous putting it into practice tomorrow.
I figured I would throw it up here and see if anyone had any suggestions and so forth.
The key thing is that so far it seems to work. Also, I know very little about log rotation, so hopefully I got the concept down, heh.
I used a few `command` calls to archive old logs; I am sure there is a Perl module out there for it, but time was of the essence here. I am open to any suggestions for replacing that.
The date/time stamp isn't working correctly. =\ |
piglist on Nov 16, 2007 at 18:20 UTC | by hsinclai |
piglist makes a formatted, sorted list of subdirectory names and their sizes in K. If the path names are longer than 44 characters, piglist reformats itself to output a 132-character-wide report.
piglist uses File::Find and the output of 'du', on Unix/Linux.
|
hidiff - char highlight diff utility on Oct 31, 2007 at 00:53 UTC | by bsb |
Highlight differences between 2 files using curses highlighting, a little like "watch -d". |
Cdrom data recovery script on Oct 29, 2007 at 04:55 UTC | by bitshiftleft |
If you are something of a code collector (like collecting code off this website), you are probably putting it on CDs. Under Win32, every time you drag files to the CD window you create a new session on it, and in doing so the new session is supposed to copy the session before it. As it happens, it doesn't do this reliably. Have you ever noticed disappearing files you swore you put on the disc? They are still there in the previous sessions; you only get to see the last session with Windows Explorer. I bought a used copy of Kaspersky's book ($5 with CD), "CD Cracking Uncovered", and used two utilities from it to recover two of my CDs.
The last chapter deals with CD recovery. These utilities read the CD at the raw sector level. The script below uses the SPTI interface (Admin), but can easily be changed to the ASPI interface (non-Admin). Yes, there are utilities out there you can buy, but when you realize that recovery is an art (one size does not fit all), it's nice to have a freebie that comes with source code and that you can modify to your own needs. |
Maple worksheet (lists) to XLS converter on Oct 10, 2007 at 13:16 UTC | by mwah |
This script reads a Maple worksheet that has been exported to plain text (.txt), extracts all 2D number lists (arrays [x,y]) of the form
'list:=[[ ...'
and
'list := [ ...'
and writes them into an Excel worksheet with column headings named after the lists.
If applied to an unconverted worksheet (.mw), only input lists are extracted (output lists will usually be generated by Maple via evaluation of functions or expressions).
|
GPLifier on Sep 28, 2007 at 17:26 UTC | by Minimiscience |
This script can be used to apply the GPLv3 copyright notices to a specified source file. Simply edit lines 10 & 11 to use your name and (if desired/needed) the path to your local copy of the GPL, and then run it with the source files as arguments. If a program consists of multiple source files, use the -n and -d flags to specify a name & description for the whole program. If you need to skip some number of lines in each file before inserting the notice, use the -h flag. To copy your local version of the GPL to a specified directory (default '.'), use the -c flag. |
podinherit - Imports pod from superclasses on Aug 20, 2007 at 18:34 UTC | by rvosa |
DESCRIPTION
When object-oriented Perl classes use inheritance, child classes will have additional methods not immediately apparent to users unfamiliar with navigating perldoc and inheritance trees. For example, IO::File inherits from IO::Handle, and so an IO::File object "does" anything an IO::Handle object does.
Novice users are sometimes confused by this, and think that APIs are more limited than they really are (on a personal note: I found this to be the case when bug reports came in that some object no longer had the "set_name" method, when really I had re-factored it into a superclass).
This script remedies that by analyzing a class (provided on the command line), recursing up the class's inheritance tree, collecting the methods in the superclasses, and importing the pod for those methods. The resulting concatenated pod is written to STDOUT. That output can then be redirected to a file, or formatted, e.g. by doing:
podinherit -class Some::Class | pod2text | more
Module authors might use this script during the packaging of their release by doing something like:
podinherit -class Some::Class >> Some/Class.pm
IMPLEMENTATION
This script contains a subclass of Pod::Parser, which implements a stream parser for pod. The appropriate documentation for superclass methods is identified by the "command" method, which takes the following arguments:
my ( $parser, $command, $paragraph, $line_num ) = @_;
To recognize pod, the method name needs to be part of a $paragraph start token, e.g. to find pod for 'method', permutations of the following will be recognized:
=item method
=head1 method()
=item method( $arg )
=item $obj->method( $arg )
Or, specifically, anything that matches:
/^(?:\$\w+->)?$method(?:\(|\b)/
I.e. an optional object reference with method arrow ($self->), a method name, and an optional opening parenthesis or token delimiter \b, to be matched against the $paragraph argument to the C<command> call in subclasses of Pod::Parser.
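A minimal sketch of just that recognition step (assuming Pod::Parser; the real podinherit also walks @ISA and assembles the collected pod):
package PodMethodGrep;
use strict;
use warnings;
use base 'Pod::Parser';

sub command {
    my ( $parser, $command, $paragraph, $line_num ) = @_;
    my $method = $parser->{wanted_method};     # hypothetical: stashed by the caller below
    print "=$command $paragraph"
        if $paragraph =~ /^(?:\$\w+->)?\Q$method\E(?:\(|\b)/;
    return;
}
sub verbatim  { return }    # ignore everything that is not a matching command
sub textblock { return }

package main;
my $parser = PodMethodGrep->new;
$parser->{wanted_method} = 'set_name';                # method whose pod we want
$parser->parse_from_file('Some/Superclass.pm');       # hypothetical superclass file
|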
scissors - divide an image in sub-images for easy printing on Aug 02, 2007 at 14:45 UTC | by polettix |
This script helps in dividing an image into blocks that can be easily printed on a normal printer instead of an A0 plotter. Input images are divided into tiles whose dimensions can be established quite easily and flexibly.
You can access the full documentation using the --man option. |
join - join two files according to a common key on Jul 12, 2007 at 15:16 UTC | by Corion |
A counterpart to part, it allows you to join two files side by side according to common values. This is similar to the UNIX join command, except that join expects the input files to be sorted by the keys, while this program slurps the second file into a hash and then outputs the result in the order of the first file.
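A minimal sketch of that approach (assuming tab-separated input with the key in the first column of both files; the posted program does more):
use strict;
use warnings;

my ($left, $right) = @ARGV;

open my $rh, '<', $right or die "open $right: $!";
my %lookup;
while (<$rh>) {
    chomp;
    my ($key, @rest) = split /\t/;
    $lookup{$key} = \@rest;                         # slurp the second file into a hash
}
close $rh;

open my $lh, '<', $left or die "open $left: $!";
while (<$lh>) {
    chomp;
    my ($key, @rest) = split /\t/;
    print join("\t", $key, @rest, @{ $lookup{$key} || [] }), "\n";   # first file's order wins
}
close $lh;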
Optionally (and untested) it can use a tied hash as on-disk storage in the case that the storage for the files is larger than the available RAM. |
File splitting script on Apr 21, 2007 at 13:12 UTC | by Alien |
A simple script that can be used to split files. |
DeathClock on Feb 21, 2007 at 21:48 UTC | by bpoag |
DeathClock attempts to determine how much time remains before a given filesystem has zero free space remaining, based on how quickly existing storage is being utilized. It will send a panic message via email to one or more recipients depending upon how confident it is that death (zero free space in the filesystem) is imminent. DeathClock can be used as a monitoring tool as well, spitting out predictions of woeful and untimely filesystem demise at user-specified intervals. It can either be run as a one-time gauge, or set to monitor a filesystem constantly. DeathClock is a morbid script that lives a mostly solitary life, with the possible exception of his partially mummified mother, in a gothic-inspired home on a hill overlooking a motel. It enjoys taxidermy, quiet dinners with the motel guests, and attacking unsuspecting customers while they shower.
Example output:
# /usr/local/bin/deathclock.pl /prod 37 1 1 0 foo@bar.com
DeathClock: Starting up..
DeathClock: Collecting 37 seconds of growth information for /prod. Please wait......................................
DeathClock: 85094.79 MB remaining in /prod. Estimated time of death: 7w 5d 9h 56m 9s.
|
part - split up files according to column value on Feb 07, 2007 at 09:44 UTC | by Corion |
I often have to split up text files for consumption in Excel. A convenient way of splitting up a file meaningfully is to split it by the value of a column. The program does this by accumulating the input into a hash of arrays keyed by the column value, as sketched below.
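A minimal sketch of that hash-of-arrays approach (assuming tab-separated input, the key in the first column, and key values that are safe to use as file names; the real part adds options for key columns and header lines, see the updates below):
use strict;
use warnings;

my %partition;
while (<>) {
    my ($key) = split /\t/;
    push @{ $partition{$key} }, $_;       # accumulate whole lines per key value
}
for my $key (keys %partition) {
    open my $out, '>', $key or die "open $key: $!";
    print {$out} @{ $partition{$key} };
    close $out or die "close $key: $!";
}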
There is an awk oneliner (I'm told) that circumvents the memory limitations this program encounters:
awk -F '{ print $0 > $3 }' FILES
If you want something like this program in a module, see List::Part
Update: Also see join - join two files according to a common key. If you need one, you'll likely need the other too.
Update: v0.03 now can part according to more than one column.
Update: v0.04 now can output multiple header lines into every file.
Update: v0.06 fixes two errors: The column at the end of each line couldn't be used (well) as the key column. Header lines are now actually printed.
Update: Now there also is a Github repository for the case that you want to submit a patch.
Update: The code is now also on CPAN as App::part. |
Excel2Text on Dec 18, 2006 at 21:58 UTC | by stonecolddevin |
Simple conversion script for Excel to a pipe ("|") delimited text file. Takes two arguments, the filename (can be relative) and the directory to save the text file to. |
venn-list: produce union of histograms on Dec 17, 2006 at 03:19 UTC | by graff |
I needed this in order to assemble word counts from 8 different sources of text, keeping track of which words came from which sources and what the overall word frequencies were. So simple, yet so useful (the POD is longer than the code itself). |
MD5 Cracker on Nov 15, 2006 at 18:59 UTC | by Alien |
Simple script for when you have an MD5 hash and want to crack it! |
lesmets.pl on Nov 10, 2006 at 08:14 UTC | by revence27 |
lesmets.pl helps you put accents on characters when you, like me, can't get a keyboard that allows you to. The POD even has instructions on how to extend Nautilus, the GNOME file manager, with lesmets.pl.
The name comes from French: part of the French for "(if) you put them", churned a bit. Get the POD documentation out with pod2html.
Um, also I have noticed it doesn't treat all STDINs the same way. But it should run fine (it does on my Ubuntu). If it doesn't, patch it -- the code is the clearest you'll find in these Code Catacombs. |
Module Finder on Oct 18, 2006 at 15:10 UTC | by innominate |
I've been having problems with some cpan installs. You know the deal. They install and test correctly, but something just doesn't work right. *cough* Needs more testing! *cough*
So, in my haste to track down a few of these bugs, I wanted to know where the module(s) were stored. I threw together an extremely simple, but rather useful little script. (It was originally a one-liner, hence the anon sub and the speedy ternary conditional.)
It's definitely not fancy, but it does its job gracefully.
IMO it's pretty self-explanatory, but the gist is that it does a simple ignore-case regex over all dirs and subdirs in @INC. When it gets a hit, it spits it out.
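A minimal sketch of the same idea (assuming the pattern is the first argument; the posted script is similarly small):
use strict;
use warnings;
use File::Find;

my $pattern = shift or die "usage: $0 pattern\n";
find(
    sub { print "$File::Find::name\n" if /\.pm$/ && $File::Find::name =~ /$pattern/i },
    grep { -d } @INC
);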
(Edited to clean up formatting!) |
linked-port: find given linked libraries ("shared objects") in FreeBSD Ports on Oct 14, 2006 at 07:34 UTC | by parv |
This is a preliminary version -- with output via Data::Dumper and lacking POD ("it's all in the code"; see the region around GetOptions()) -- to find linked libraries in files related to FreeBSD ports.
This came about due to recent OpenSSL security advisories, which necessitated rebuilding ports that were linked to old libraries. Dmitry Marakasov, in message <20060907181108.GB90551@hades.panopticon> on the freebsd-ports mailing list, posted ...
for port in `pkg_info -oaq`; do
grep OPENSSL /usr/ports/$port/Makefile >/dev/null &&
echo $port;
done
... which seemed not very reliable, as that would miss any port which does not have "OPENSSL" in its Makefile. "security/nss", used by Firefox, is such a port. So I decided to just use ldd(1) directly on the installed files ...
# listpkg (used in filepkg): http://www103.pair.com/parv/comp/src/perl/listpkg-0.22
# filepkg: http://www103.pair.com/parv/comp/src/sh/filepkg
filepkg . | egrep '^/.*(bin|libexec)' \
  | xargs -I % ldd % 2>/dev/null | less -p'(crypto|ssl)'
... the output of which was rather tiresome to search through, and that was enough to open up my $EDITOR and flex some perly muscle.
|
Move/merge directories on Sep 14, 2006 at 23:15 UTC | by diotalevi |
This allows you to merge two identical-ish directory trees. It won't overwrite any files if there's a conflict. |
diotalevi's grep on Sep 14, 2006 at 22:29 UTC | by diotalevi |
This grep is much like everyone else's Perl reimplementation of grep. Its only distinguishing features are automatically looking inside bzip2, gzip, zip, and tar files. It borrows the pretty formatting used by petdance in ack. This started life as an improved version of the grep that comes with Solaris, which isn't recursive. |
cutf - cut by field name on Sep 14, 2006 at 22:23 UTC | by diotalevi |
Prints selected parts of lines from each FILE to standard output. Selects parts by field name, unlike /usr/bin/cut, which uses column numbers.
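A minimal sketch of cutting by field name (assuming a single tab-separated input file whose first line is a header, and a comma-separated list of field names as the first argument; the real cutf is more flexible):
use strict;
use warnings;

my @wanted = split /,/, (shift or die "usage: $0 field1,field2 file\n");
my @idx;
while (<>) {
    chomp;
    my @fields = split /\t/, $_, -1;
    if ($. == 1) {                                 # header line: map names to positions
        my %pos;
        @pos{@fields} = 0 .. $#fields;
        @idx = map { defined $pos{$_} ? $pos{$_} : die "no field '$_'\n" } @wanted;
        next;
    }
    print join("\t", @fields[@idx]), "\n";
}
|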
cksum contents of a tarball on Sep 14, 2006 at 22:17 UTC | by diotalevi |
Produces cksum info on the contents of a tarball while leaving the minimum files extracted at any given moment. |
uhead: "head -c" for utf8 data on Sep 01, 2006 at 03:25 UTC | by graff |
This simple command-line utility does for utf8 text data what GNU "head -c N" does for ASCII data: print just the first N characters of files (or STDIN). Since Perl's built-in "read" function is able to read characters (rather than just bytes), this is a pretty trivial exercise. But I wanted to post it anyway, because it's a nice demonstration of a fairly complex process (handling variable-width characters) being made really simple.
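A minimal sketch of that trick (assuming UTF-8 input on STDIN and a character count as the only argument; the posted uhead also handles files and options):
use strict;
use warnings;

binmode STDIN,  ':encoding(UTF-8)';
binmode STDOUT, ':encoding(UTF-8)';

my $n = shift || 100;
read STDIN, my $chars, $n;    # with an :encoding layer, read() counts characters, not bytes
print $chars;
|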
Color diff for terminal on Aug 12, 2006 at 12:07 UTC | by polettix |
Update: there's a stripped down Perl 6 version, check it out!
This will help you browse diff-like output (either plain or unified) on the terminal, colorising the differences. It can also invoke other programs that produce diff-like output (like some version control system commands).
The approach is different from that of other colorising scripts:
- I'm not using Algorithm::Diff, and
- I'm using terminal colorising capabilities, not HTML
Regarding the first bullet, it's better than it may look. To diff two files I'm either fork()-ing a diff process or asking you to do so, but this is intentional because it gives access to (nearly) all the options that diff has. Moreover, this script works with all diff-producing programs (as long as the format is compatible with diff, either in the "plain" style or the "unified" one), while Algorithm::Diff would be restricted to the two-files-diffing world (more or less).
Regarding the second bullet, I needed something to quickly show me the differences on the terminal - just like diff (or cvs diff, or svk diff, or ...) does, but with colors to make them stand out. The drawbacks are many, among which:
- you have to use a terminal that works with Term::ANSIColor
- it's mostly useless for very long diffs
Again, this was no problem for me because I have such a terminal and the diffs I check are usually less than a few (terminal) pages.
As a final note, if you have symbolic links you can get the most out of the script by:
- installing it as colordiff somewhere inside $PATH
- making a symbolic link to it named cdiff, again inside the $PATH
Invoking the script when it's called cdiff basically turns it into a drop-in replacement for diff(1), so that you can:
shell$ cdiff -u file1 file2 # with colors, instead of
shell$ diff -u file1 file2 # without colors, or
shell$ diff -u file1 file2 | colordiff # wow, way too long!
I know that this name-based behaviour change is generally frowned upon, but I think that in this case the advantage is self-evident.
Be sure to check the documentation for hints about using this with version control systems, like CVS or SVK. I hope you'll find it useful! |
lspm — list names and descriptions of Perl modules in a directory on Jul 16, 2006 at 05:48 UTC | by Aristotle |
Remember pmdesc2 - lists modules with description? It’s a script that lists any or a subset of the modules you have installed, complete with the version number and a description, inspired by Tom Christiansen’s pmdesc, but without a number of its annoying flaws, with much higher speed and far cleaner code.
This time around, I added a bunch of options and DOS-newline translation to address problems brought up by Fritz Mehner. In the process, I also cleaned the code up even further and added POD and proper --help etc by way of the inimitable Pod::Usage.
Update 2006-07-16T11:03+0200: fixed a minor oopsie with --align-cont. |
Exchange to Postfix Firewall Configuration on Jun 29, 2006 at 16:00 UTC | by madbombX |
This script will pull all users SMTP addresses from your Active Directory (including primary and secondary email addresses) and list them in the format "user@example.com OK" which Postfix uses with relay_recipient_maps. It will also automatically create a mynetworks file, transport file, and relay_domains file to ensure all information is properly included. Additionally, if you have amavisd installed, you can specify whitelist and blacklist information for the information retrieved. You can also include senders and recipients not on either list and assign them a score. Don't forget to restart postfix after running this script.
Project Page: http://eric.lubow.org/projects/getcrr.php
There are links to the latest version of the file and the latest version of the config file located on the project page. |
perltoxmi on Jun 16, 2006 at 19:03 UTC | by g0n |
Rough and ready way to convert OO Perl code to XMI for UML class diagrams, for import into Argo/Rose/etc. |
psh (perl testing shell) on May 11, 2006 at 18:16 UTC | by jettero |
I'm sorry to say, I've written another Perl shell. This one is designed to help with the development of Perl programs by giving you a place to try things out.
I've been using it to support my own coding for quite some time now, and a friend encouraged me to publish it. I wrote this one to compete with hilfe specifically, though Python and Ruby have similar devices.
UPDATE(2/14/08): This has evolved a bit since my original post May 11, 2006 at 14:16 EDT.
UPDATE(8/28/8): More evolution. There is now support for paging in less, shell forks, config editing, and assorted perldoc forks. |
script-starter on May 02, 2006 at 19:33 UTC | by polettix |
A template for creating new scripts, much in the Module::Starter spirit, together with a script that filters the template to actually create the new script. Ok, it's simpler than what I've described!
The script has documentation, but it's not based on the template - just because the template evolved with time (e.g. english translation).
Update: followed clever suggestions from chanio.
Update: followed clever suggestion from http://perlbuzz.com/2009/11/the-horrible-bug-your-command-line-perl-program-probably-has.html. |
md5sum check on Mar 23, 2006 at 05:57 UTC | by w3b |
When there is an intruder on a computer, it is a state of emergency: the bad guy can modify files like /etc/ssh/sshd_config or /etc/passwd... I don't want to check checksums by hand every day, so I wrote a script which does it for me :) In .sumlog it records a checksum::path entry for each file, and that's all. |
Splitted Zip on Mar 04, 2006 at 02:42 UTC | by polettix |
Sometimes email puts a hard limit on the size of the files we can send. On these occasions compressing comes in handy, because it reduces the size of the data, but that may not be enough. Many tools allow the production of a split ZIP file, but this approach, while general, requires more knowledge on the side of the recipient, who is obliged to save all the chunks in a directory. Many users simply don't want to grasp this simple concept, and insist on double-clicking on the file they receive.
This is where split-zip.pl comes to the rescue. If it can. Its purpose is to arrange the files to be sent so as to produce multiple ZIP archives, each of which remains valid and self-contained. Thus, the casual user double-clicking on one will be happy and will see some of the files. Of course, this approach fails miserably if you need to send a single, huge file - you're stuck with training your users better in that case.
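A minimal sketch of the "many self-contained archives" idea (assuming Archive::Zip and a crude per-archive budget based on uncompressed input size; the real split-zip.pl handles the details):
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);

my $limit = 5 * 1024 * 1024;               # assumed budget per archive (5 MB, uncompressed)
my ($zip, $used, $part) = (Archive::Zip->new, 0, 1);

for my $file (@ARGV) {
    my $size = -s $file;
    if ($used && $used + $size > $limit) { # close the current archive, start a fresh one
        $zip->writeToFileNamed(sprintf 'part%02d.zip', $part++) == AZ_OK
            or die "write failed\n";
        ($zip, $used) = (Archive::Zip->new, 0);
    }
    $zip->addFile($file);
    $used += $size;
}
$zip->writeToFileNamed(sprintf 'part%02d.zip', $part) == AZ_OK or die "write failed\n";
Each partNN.zip produced this way opens on its own, which is exactly the property described above.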
Note: I only tested it in a few cases, be sure to read the disclaimer at the end!!! |
Ppushd - emulate pushd and popd commands of the shell on Feb 28, 2006 at 19:32 UTC | by ambrus |
This script emulates the shell functions pushd, popd, and dirs. It stores the directory stacks in a data file ~/.ppushd.dat.
It needs some help from the shell because otherwise it won't be able to change the directory. Save the script with the name ppushd-bin somewhere in your path, make it executable, and add the following commands to your bashrc to be able to use it.
pdirs() { cd "`ppushd-bin dirs "$@"`"; }
pinit() { cd "`ppushd-bin init "$@"`"; }
ppushd() { cd "`ppushd-bin pushd "$@"`"; }
ppopd() { cd "`ppushd-bin popd "$@"`"; }
pinit
This script is dedicated to demerphq.
Update: it might be better to implement this fully as a shell script without perl.
Update 2016-01-06: See also "I'm trying to write a script that will change directory (or set a variable), but after the script finishes, I'm back where I started (or my variable isn't set)!" in Greg's bash FAQ |
jfind - java class search on Feb 10, 2006 at 21:15 UTC | by vladb |
Searches for classes in a set of jar files that match a given pattern.
Usage:
jfind -help -s <pattern> <jars ...>
The class <pattern> may contain '/', e.g. javax/ejb/EJBObject; the script will convert all '/' to '.'.
Usage Examples:
jfind digester *.jar
jfind -s digester *.jar
Search classpath...
jfind org.foo.Bar /usr/lib/foo.jar:/usr/lib/bar.jar
|
Xcalcfin on Jan 05, 2006 at 19:57 UTC | by smokemachine |
Financial calculator. |
Comment on Dec 28, 2005 at 18:13 UTC | by smokemachine |
Comment your code |
Saving digital camera photographs on Sep 30, 2005 at 23:37 UTC | by Hue-Bond |
I've been having compact flash corruption problems lately, so I decided to keep my CF as empty as possible and format it regularly. I've prepared this little script that keeps an eye on the system log to see when I plug the camera in, and then moves the pictures to a safe place. It needs to run as root, but the real work is done in a separate, unprivileged child. This way, all I have to do is plug the camera in, watch the syslog for a while, and unplug it again.
I tried quite hard not to reinvent any wheel. If I did, please pull out your flamethrower. |
expanded "ls" on Aug 10, 2005 at 03:14 UTC | by blahblahblah |
wrapper for "ls" that supplies default args and paged output
I got tired of typing "ls -halt ... | less -E". Now I just run this script, which I've named "sl".
Usage:
sl
sl -R
sl -R *.pl *.txt
|
Autoresponder automator on Jul 27, 2005 at 10:04 UTC | by jkva |
UPDATE: Working on quite a few improvements now; using the current version is ill-advised.
UPDATE: Fixed. I wonder if I should let it go to the next users on file access errors or die() ...
UPDATE: b10m notified me of design flaws, I am fixing it now. Stay tuned.
UPDATE: Added status message when successful copy of .forward file has been achieved.
UPDATE: Fixed a bug and added comments.
This script generates a .forward Exim filter, plus the needed .vacation.msg reply message. Those two files are standard; the .vacation.msg is changed to contain the start and end dates, so that the person receiving the reply knows when the user will be back.
I could've done it by hand, but I thought writing a script would be good exercise.
Any hints/advice/notices of bad style or simply horrid code are greatly appreciated. Criticism == learning opportunity.
It's the first script I've written on Unix (or in Vim, for that matter). Future additions include reading the user data out of a file, and removing itself from the crontab when the "last" date has passed.
I'm thinking of reading the vacation dates out of the MySQL database that the company web-based calendar uses... |
monitor suid and world writtable files on Jul 18, 2005 at 14:44 UTC | by Anonymous Monk |
A script to scan the system for new suid and world-writable files and send an email with the scan results if it discovers one or more. It skips /home and NFS-mounted directories.
The NFS-mount skipping part is Solaris-only. |
Transposer on Jul 13, 2005 at 02:47 UTC | by Pied |
I sometimes miss Ocaml while parsing lists...
Yesterday, I needed something that would take a list of tuples and give me back the list of all the nth elements of each tuple.
Here we go!
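For the record, a minimal transpose sketch along these lines (assuming a list of equal-length array refs; not necessarily how the posted version does it):
sub transpose {
    my @rows = @_;
    my @cols;
    for my $r ( 0 .. $#rows ) {
        $cols[$_][$r] = $rows[$r][$_] for 0 .. $#{ $rows[$r] };
    }
    return @cols;                      # list of array refs, one per original column
}
# e.g. transpose([1, 2, 3], [4, 5, 6]) returns ([1, 4], [2, 5], [3, 6])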
Update: changed the name, as graff pointed out it was just a kind of matrix transposer in fact :) |
Random Password Generator on Jul 01, 2005 at 07:43 UTC | by satz |
This script will generate a 9 character random password, consisting of 3 uppercase characters, 3 lowercase characters & 3 numbers. |
Color Space Converter on Jun 13, 2005 at 15:11 UTC | by js29a |
Converts between HSL and RGB (in both directions), CMY and RGB, and XYZ and RGB.
Input and output are either three-element arrays or hash refs.
Input and output values are normalized to the 0.0 - 1.0 range. Input variables are asserted.
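For illustration, a minimal RGB-to-HSL sketch with every value in the 0.0 - 1.0 range (hue is returned as a 0..1 fraction as well); this is the textbook formula, not necessarily the posted module's exact code:
use strict;
use warnings;
use List::Util qw(min max);

sub rgb_to_hsl {
    my ($r, $g, $b) = @_;
    my ($lo, $hi) = (min($r, $g, $b), max($r, $g, $b));
    my $l = ($hi + $lo) / 2;
    return (0, 0, $l) if $hi == $lo;          # achromatic: no hue or saturation
    my $d = $hi - $lo;
    my $s = $l > 0.5 ? $d / (2 - $hi - $lo) : $d / ($hi + $lo);
    my $h = $hi == $r ? ($g - $b) / $d + ($g < $b ? 6 : 0)
          : $hi == $g ? ($b - $r) / $d + 2
          :             ($r - $g) / $d + 4;
    return ($h / 6, $s, $l);
}
# e.g. rgb_to_hsl(1, 0, 0) returns (0, 1, 0.5) -- pure red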
|
webpage update watch - used to watch event update on Jun 11, 2005 at 21:25 UTC | by Qiang |
Ever read something like 'come back in a few days to check ...' on a webpage? I certainly do. How do you get informed when the event gets posted?
I wrote this script to watch for event updates on certain webpages. It keeps the page's last update time in a plain file and compares it with the page's current update time.
If the times don't match, it sends an email to inform me of the update.
The hash used to store the page info may be a little lame and could be factored into a config file with the help of Config::Tiny or others.
I set this script up as a cron job and run it daily. I have never missed an event I am interested in.
This script probably only works on static webpages. |
unzip on May 14, 2005 at 23:33 UTC | by polettix |
A little utility which includes some options from Info-ZIP's unzip program (available at http://www.info-zip.org/pub/infozip/). Help message:
Usage ../unzip.pl [-l|-p] [-q] [-x xlist] [-d exdir] file[.zip] [list]
Default action is to extract files in list, except those in xlist, to exdir.
If list is not provided, all files are extracted, except those in xlist.
Extraction re-creates subdirectories, except when exdir is provided.
-d extract to provided directory, no directory structure.
-h this help message
-l list files (short format)
-p extract files to stdout, no messages
-q quiet mode, no messages
-x exclude files that follow in xlist, comma-separated (Note 1)
Note 1: files with commas aren't allowed yet :)
The utility is primarily intended as a quick replacement for unzip on systems where that utility isn't available. I've implemented the options I use most, like seeing what's inside the file (-l option) and extracting to a directory without structure (-d option, even if I'm not really sure about this one). I also find extraction to standard output quite useful from time to time, so I put it in (-p option).
As an added bonus, you can provide a list of files to extract (default is all files) and of files to avoid to extract (-x option). Testing will be implemented in the future, if I remember...
The command line differs from that of Info-ZIP unzip because the order for the options is different. Here I expect all options listed at the beginning, then the zip file name, then the names of the files to extract (if any). That's basically how Getopt::Std::getopts works, sorry for this.
See also Create/Extract Zip Archives from #include for a bidirectional utility (but with less options for unzipping). |
dar - pure perl directory archiver on May 06, 2005 at 19:45 UTC | by Ctrl-z |
Simple directory archiver akin to tar, but works on Win32. See pod. |
detacher.pl on Apr 29, 2005 at 05:02 UTC | by enemyofthestate |
Detaches itself from the terminal and starts a program. Originally written to start Java code that refused to go into the background. Since then I've used it on Perl 'daemons' as well. Based on a trick I learned way back in my OS-9 days. |
ppk on Apr 18, 2005 at 01:37 UTC | by northwind |
Perl Process Killer (PPK)
Usage: ppk {process name, required} {iterations, optional}
|
Remote killing of processes on Apr 08, 2005 at 11:47 UTC | by PhilHibbs |
Opens a telnet session, lists processes, and prompts the user to kill each one. The code is wrapped in my version of the bat2pl framework and is saved as a .cmd file, but anyone using this in a non-MSWindows environment can delete up to the #!perl line and also the last line of the script. It also uses Win32::Console to wrap long lines in a more readable manner, but those 4 lines can be replaced with either a hard-coded value (e.g. $w=80) or the equivalent console-width-determining logic for the platform.
If your implementation of ps formats its output differently then the hard-coded value 47 may need to be adjusted for more pleasant display (hanging indent).
Also uses Term::ReadKey which is not a standard distro module. |
ICQ group chat on Mar 14, 2005 at 17:03 UTC | by dpavlin |
This small script is for unlucky people (like me) who need group chat functionality for ICQ but don't have a client that supports it.
You will need a separate account for this bot. The !config command exists so that you can edit the YAML configuration file (to kick somebody out) and reload the config without restarting the script.
The latest development version (with support for buddies, logging via DBI, !last and a ton of other features) is available from the Subversion repository. |
VectorSpace on Feb 25, 2005 at 15:47 UTC | by smalhotra |
I needed something to iterate over every combination of a set of points, essentially to traverse a vector space. Since I couldn't find anything readily available, I decided to roll my own. It's posted here in case anyone needs to do the same (or for when I need to recall it). I didn't post the base class, Object, but it is essentially something that supports new(), set(), and get() methods. |
Solaris - change hostname / ip / default-router-ip script on Feb 19, 2005 at 22:38 UTC | by Qiang |
I got bored when I had to change the IP/hostname from time to time on Solaris 7 or 8 machines; there are too many files that need to be changed!
You can change the IP or the hostname, or do both at the same time. If the new IP is on a different subnet from the old one, the default router IP gets changed too (as in the second example). Two examples of running this script are shown below.
Currently this script only prints out the commands it is going to perform. To use it for real, uncomment the following line in the script:
#print "\t $f changed\n" unless (system($cmd));
script -oldip [ip] -newip [ip] -oldhost [host] -newhost [host]
script -oldip 1.2.1.1 -newip 1.2.3.100
I wish I could make this script shorter :)
UPDATE: added \Q \E and \b.
Also: don't trust user input (although in this case it's only myself) and validate it before processing. |
IIS 6.0 Export and Import to Back Server on Feb 10, 2005 at 23:39 UTC | by LostS |
Ever needed to get your IIS metabase copied to your DR server nightly? Ever just needed to do a quick export of your IIS settings, change some stuff, and put it on a new/different system? Well, I finally found out how to do it on IIS 6.0 on a Windows 2003 server, and that is what this script does.
I hope you all enjoy it. |
automatic-AutoBundle.pl on Jan 31, 2005 at 00:03 UTC | by hsinclai |
Run CPAN autobundle and send the Bundle to your archive directory. I always forget, so now it's in cron. |
Find children of process (unix/linux) on Jan 26, 2005 at 16:43 UTC | by Tanktalus |
Prints out all the child processes of a given process ID. The tough part is that ps is inconsistent across Unix platforms. |
Indent Perl on Jan 17, 2005 at 07:33 UTC | by dr.p |
Takes messy Perl code and makes it beautiful. It doesn't matter if former indentation existed, or whether it was done with spaces or tabs. The indent character is a space and the indent size is set to 2 by default; these are easy to change near the top of the script.
WARNING: Don't use this on anything other than Perl code. Unwanted results will most likely occur.
|
File Statistics on Jan 11, 2005 at 13:16 UTC | by sweetblood |
fstat [-a] [-s] [-m] [-p] [-u] [-h] file ...
-a Print files access time.
-m Print files modify time.
-s Print files size in bytes.
-p Print files permissions(mode).
-u Print files owner user id.
-h Print help(this screen).
If no options are present on the command line all statistics will be returned.
|
xls2tab - Simple MS Excel to TSV converter on Jan 10, 2005 at 19:07 UTC | by legato |
Converts XLS data into TSV format, putting multiple sheets into separate files. Output files have a .tab extension. This is particularly useful for reading XLS files on non-MS platforms, and for bulk-loading data from XLS sheets into an RDBMS.
Tested on Windows 2000 and Linux.
Module requirements: |
wmchoose on Dec 27, 2004 at 21:58 UTC | by blazar |
Naive Window Manager selection tool
Data used by the program is appended to it after the __END__ token. Embedded pod documentation describes its format for maintainers/admins (no Perl experience/knowledge required). This is actually being used at a site: it's rudimentary, but it works! Users seem to be satisfied.
Note that die()s and warn()ings intended for the final users are in the "\n" form, whereas those for maintainers are in the somewhat more informative "\n"-less one, so before pointing out that there's an inconsistency, consider that it's there by design...
Suggestions and improvements are welcome. About the only feature I've considered adding myself is support for multi-line text to be inserted in ~/.xinitrc, but so far nobody has requested it, at least in the environment where this is being used.
|
Input highlighter / visual grep on Dec 17, 2004 at 02:40 UTC | by Aristotle |
Inspired by pudge over at use.perl, here's a short little script you can use to highlight pattern matches in input data.
usage: hl [ -c colour ] [ -x ] pattern [ file... ] [ < input ]
You can use capturing parens in your pattern. In that case, you can supply multiple attributes separated by commas, which will be used to individually colour the submatches.
-x will suppress lines without matches.
Update: fixed massive offset calculation bug, hugely simplified the colourizing routine.
Due to the semantics of the @- and @+ arrays, my first stab was a horrible monster and incredibly difficult to debug, far harder to write than it promised to be. The special entries at index 0 indicating the start and end of the entire match required terrible contortions to take into account.
And, surprise surprise, the code was buggy.
In fixing my bug, I realized that the proper special case looked almost like a common case. And then I realized that by appending a phantom zero-length match and changing index 0 to instead signify a phantom zero-length 0th match, both special cases disappear.
Lesson: when implementing the semantics turns your brain to mush, change the semantics.
For a history of the code, look at aforementioned use.perl thread. |
Remove Duplicate Files on Oct 29, 2004 at 02:38 UTC | by jfroebe |
Searches a list of directories provided on the command line and removes duplicates. It remembers previous runs (in a compressed, delimited file) and is able to remove 'cache' entries that point to nonexistent files.
A summary is printed
Loaded 93031 entries
TOTAL files: 93030
Added files: 0
Deleted files: 0
Files not found: 0
|
Bulk file attachment extractor for Lotus Domino on Oct 14, 2004 at 19:44 UTC | by diotalevi |
A simple extraction tool for Lotus Domino applications. |
Discussion Section / Office Hours Preferences Analizer on Sep 14, 2004 at 05:15 UTC | by hossman |
This script parses a data file containing students' votes for when discussion sections (or office hours) should be held, and generates an Excel spreadsheet containing stats on how useful each trio of sections would be (along with a complete roster of everyone whose info is in the data file).
People who aren't enrolled or waitlisted are included in the roster, but their preferences are not counted.
See Also: the CGI to generate the DATA
|
norobotlog on Sep 06, 2004 at 01:42 UTC | by quartertone |
I always look at my Apache server log files from the command line. It always bothered me to see "GET /robots.txt" contaminating the logs, and it was frustrating trying to visually determine which requests came from crawlers and which came from actual users. So I wrote this little utility, which filters out requests made from IP addresses that grab "robots.txt". I suspect there are GUI log parsers that might provide the same functionality, but 1) I don't need something that heavy, 2) I like to code, 3) imageekwhaddyawant. |
submit-cpan-ratings - upload ratings to CPAN for stuff you've used on Aug 20, 2004 at 17:09 UTC | by diotalevi |
I am posting this script here to perhaps gather any feedback about the implementation or design before I submit this to CPAN.
submit-cpan-ratings is a script which automates the process of finding the modules you've used in your code and submitting module reviews to http://ratings.cpan.org. For example, to submit a review of the modules you used in your source directory: $ submit-cpan-ratings ~/src/a_script ~/src/a_directory ~/whatever
You'll be told which modules were found and what the versions are. As each module is checked, http://search.cpan.org and http://ratings.cpan.org will be used to find the proper module name and version. If the module you used isn't on CPAN under the name you called it, or if the version you're using isn't available for rating, you won't be able to submit a rating. This uses the same .pause file that the L<cpan-upload> script uses for your PAUSE credentials.
|
vimod on Aug 12, 2004 at 21:07 UTC | by tye |
Use 'vi' as your 'pager' to view module source code.
This is a bit like "perldoc -m Module" then "v" to tell your pager (if it is smart enough) to throw you into your favorite editor (but less typing, supports multiple module names, leaves the paths to the modules displayed after you exit the editor, doesn't require your pager support "v", etc.)
Requires Algorithm::Loops.
(Updated to remove mispelt "pathes". Thanks, gaal.) |
File Chunkifier on Jun 15, 2004 at 14:09 UTC | by husker |
Splits a file into N evenly-sized chunks, or into chunks with at most N lines each. Works on Windows or UNIX. Allows header or footer text to be prepended/appended to each output file. |
Comment Stripper script for unix on Jun 14, 2004 at 01:55 UTC | by hsinclai |
e.pl
invoke as "e" or "ee"
Comment stripper for unix, useful during system administration. Removes blank lines, writes output file, strips "#" or ";". Tries to preserve shell scripts.
Please see the POD |
vimrc for documenting subs on May 19, 2004 at 13:50 UTC | by scain |
This is a vimrc that will automatically create documentation and skeleton code for a subroutine. To use it, type the name of the new subroutine on a line by itself in vim, then press either F2 for a generic sub or F3 for a Get/Set sub. The result for a generic sub looks like this:
=head2 generic_sub_name

=over

=item Usage

  $obj->generic_sub_name()

=item Function

=item Returns

=item Arguments

=back

=cut

sub generic_sub_name {
  my ($self, %argv) = @_;

}
and for a get/set sub, like this:
=head2 get_set_name

=over

=item Usage

  $obj->get_set_name()        #get existing value
  $obj->get_set_name($newval) #set new value

=item Function

=item Returns

value of get_set_name (a scalar)

=item Arguments

new value of get_set_name (to set)

=back

=cut

sub get_set_name {
  my $self = shift;
  return $self->{'get_set_name'} = shift if @_;   # set when a new value is passed
  return $self->{'get_set_name'};
}
|
intelli-monitor.pl on May 06, 2004 at 18:22 UTC | by biosysadmin |
This is a quick program that I wrote for a friend who had some flakiness on his server. It parses the output of `ps ax`, checks for vital processes, and restarts any processes that are stopped. |
stopwatch on May 06, 2004 at 16:25 UTC | by meonkeys |
Simple console-based stopwatch. Assumes 80-character wide terminal. Stop with Control-C. Requires Time::Duration. |
MakeAll.Pl on May 01, 2004 at 12:58 UTC | by demerphq |
NAME
MakeAll.pl - Build, test and optionally install a module on all versions of perl
located in path.
SYNOPSIS
MakeAll.pl [options] [file ...]
Options:
--help brief help message
--man full documentation
--verbose be talkative; repeat or assign a higher value for more
--install do an install in each perl if build is good
--no-run don't run test, just scan path
--scan short for --verbose --no-run
DESCRIPTION
This program will run the standard
perl Makefile.PL
make
make test
and optionally
make install
for each distinct perl executable it finds in the path (it uses some tricks to make sure they aren't dupes). On Win32 it will use make or nmake as necessary, based on whether the script itself is run under ActiveState Perl or Cygwin Perl. On non-Win32 machines it will use make.
AUTHOR AND COPYRIGHT
Copyright Yves Orton 2004. Released under the same terms as Perl itself. Please
see the Perl Artistic License distributed with Perl for details.
|
VarStructor 1.0 on Apr 30, 2004 at 19:58 UTC | by Wassercrats |
Alternative to Perl's reset function, with extra features. Also could be used to print variables and their values, including "my" variables. See top comments in script.
I'll probably add an option to exclude the "my" variables, and I intend to make this into a Cpan module (it's currently in subroutine form).
This is an improved version of VarStructor. |
Copy Permissions on Apr 25, 2004 at 19:25 UTC | by BuddhaNature |
Based on Ben Okopnik's cpmod script, this script takes two directories (or files) and recursively sets the permissions of files/directories that exist in both to those of the version in the first. Handy if you want to check something out of a version control system and set the permissions to those of an already existing copy elsewhere.
NOTE: The full paths of both directories/files must be used, or ~/ if in your home directory.
UPDATED: Made use of japhy's suggestion.
UPDATED: Made use of some of davido's suggestions and those in his perlstyle piece.
UPDATED: Made use of another of davido's suggestions regarding checking the matches. |
mail-admin.pl on Apr 23, 2004 at 00:34 UTC | by biosysadmin |
I wrote mail-admin.pl this afternoon in order to manipulate my server's MySQL + Postfix virtual tables. It's a little rough at the moment, but I'm already planning a smoother version 0.2. :)
Information on the structure of the MySQL tables is available here, as well as details on the Postfix configuration that I used. |
VarStructor II -- Abbreviation tool on Apr 20, 2004 at 13:40 UTC | by Wassercrats |
Similar to Text::Abbrev (Text::Abbrev output example here). I intend to use it to create unique variable names based on the first few characters of the variables' values (the elements of @Lines). Variables that are still identical get numbered.
It's already set up as a demo, so try it out and see how it differs from Text::Abbrev and similar modules.
@Lines contains demo phrases to be abbreviated. The configuration options are $Max_Length (of returned phrases), $Letters_Only (1=yes, 0=no), and $Replacement (string that replaces non-letters).
The code is full of global variables, not enough comments, and probably other bad stuff, but I know that's what everyone loves about me. I intend to fix it up. Maybe. |
grepp -- Perl version of grep on Apr 15, 2004 at 01:45 UTC | by graff |
It's the old basic unix "grep" on steroids, wielding all the regex power, unicode support and extra tricks that Perl 5.8 makes possible, without losing any of grep's time-tested elegance. |
VarStructor on Apr 14, 2004 at 22:56 UTC | by Wassercrats |
This script is obsolete. See updated script at VarStructor 1.0.
An alternative to the eventually-to-be-deprecated reset function, plus lists variables and their values. Place VarStructor in the script containing the variables. Configurable with $CLEAR and $OMIT.
One advantage over some alternatives for listing variables is that variables in the code will be listed even if they weren't "seen" during run time. This includes variables within comments, though that part isn't an advantage (bottom of my list of things to fix).
Other limitations: it doesn't handle hashes or variables beginning with "_", and spacey values will look spacey (not visually delimited) in the output. I heard something about variable names containing a space not working either (whatever those are). I might fix all that, depending on the response I get. |
count tables in all databases of a mysql server on Apr 11, 2004 at 06:44 UTC | by meonkeys |
This will give counts for the number of tables in each database in a MySQL server as well as provide a total number of tables in all databases in a given server. |
automatically mount/umount USB mass storage device with hotplug on Apr 06, 2004 at 18:14 UTC | by meonkeys |
This script will allow you to plug in a USB Mass Storage Device (digital cameras, flash readers, etc) and have Linux automatically mount it, as well as unmount it on unplug. Someday Linux distros will do this for you (if they don't already), but for now, this may be handy. Requires 'hotplug' (for assignment of actions on device plug-in) and 'logger' (for writing status messages to the system log, probably /var/log/messages).
Before you ask questions, read this tutorial, which covers all prerequisites besides those I've already mentioned.
This script was designed for use with my Leica digital camera, so please excuse any hardcoded values including the string 'leica'.
|
apt-topten on Apr 05, 2004 at 21:32 UTC | by pboin |
Apt-topten will help bandwidth-challenged folks make good decisions on what packages would be good candidates to remove before doing an 'apt-get upgrade'. It lists the top packages that apt wants by size, descending. |
ncbi-fetch on Apr 05, 2004 at 13:48 UTC | by biosysadmin |
ncbi-fetch will fetch sequences from NCBI using the Bio::DB::GenBank Perl module (available as part of the BioPerl package). Each sequence is saved to a separate file named by accession number. The program introduces a three-second delay between successive requests in order to avoid placing too much stress on the NCBI servers.
|
repinactive on Mar 10, 2004 at 19:40 UTC | by sschneid |
repinactive prints a summary report of accounts deemed 'inactive'. |
jdiff - jar/java diff on Mar 04, 2004 at 21:51 UTC | by vladb |
Compares java classes in a set of jar files and prints out a straightforward inconsistency report. The report shows missing and modified files for each jar file. |
seq-convert on Feb 23, 2004 at 20:28 UTC | by biosysadmin |
A quick and dirty program that uses the BioPerl SeqIO modules to convert biological sequence data.
seq-convert [options] input-file
options:
--input <inputformat>
--output <outputformat>
--formats
--subseq <range>
--help
OPTIONS
--input
Specifies the format of the input file. Defaults to fasta.
--output
Specifies the output format. Defaults to fasta.
--print-formats
Prints the sequence file formats available to this program.
--subsequence range
Selects a subsequence of the sequence contained in the input file. Ranges should have the form x-y, where x and y are positive integers.
--help
Prints a detailed help message.
--version
Prints version information.
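The conversion itself boils down to a Bio::SeqIO read/write loop; a hypothetical stripped-down version (option handling and --subseq omitted, formats given as plain arguments):
use strict;
use warnings;
use Bio::SeqIO;
my ($file, $informat, $outformat) = (shift, shift || 'fasta', shift || 'fasta');
my $in  = Bio::SeqIO->new(-file => $file, -format => $informat);
my $out = Bio::SeqIO->new(-fh => \*STDOUT, -format => $outformat);
while (my $seq = $in->next_seq) {
    $out->write_seq($seq);    # re-emit each sequence in the requested format
}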
|
helios - updated atchange on Feb 10, 2004 at 21:42 UTC | by biosysadmin |
Helios will monitor a file (or set of files) for changes to their modification times, and execute a command (or list of commands) after the modification time is updated. Inspired by the atchange program available from http://www.lecb.ncifcrf.gov/~toms/atchange.html. I use this program in order to save myself some keystrokes while compiling java files:
$ helios --verbose StringSort.java 'javac *.java'
Watching StringSort.java, initial mtime is 1076448240
StringSort.java last modified at 1076524024.
Executing javac *.java ... done.
Or, to save myself even more keystrokes:
$ helios --multiple-cmds StringSort.java 'javac *.java','java StringSort'
It's fun to watch the output scroll by after just saving my file. :)
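The underlying idea is just an mtime polling loop; a hypothetical stripped-down sketch (one file, one command, no options):
use strict;
use warnings;
my ($file, $cmd) = @ARGV;
my $mtime = (stat $file)[9];
print "Watching $file, initial mtime is $mtime\n";
while (1) {
    sleep 1;
    my $now = (stat $file)[9];
    next unless defined $now;          # the file may briefly vanish while being saved
    if ($now != $mtime) {
        $mtime = $now;
        print "Executing $cmd ... ";
        system $cmd;
        print "done.\n";
    }
}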
|
Crosslink on Feb 02, 2004 at 08:40 UTC | by BrentDax |
For a Web project using Embperl, I have to show a small, fixed set of pages with several different templates (base.epl scripts, essentially, although I used a different extension). I chose to do this by creating hard links between the content files in each directory.
crosslink.pl (or whatever you choose to call it) takes a source directory, a destination directory, and a list of files. It then runs link("sourcedir/filename", "destdir/filename") (or symlink) on each file. This allows me to do things like:
crosslink template1 template2 index.epl bio.epl links.epl images/picture.jpg |
Build Bundle releases on Jan 29, 2004 at 18:24 UTC | by gmpassos |
This script will build a release of a module and all its dependencies. (Read the script comments for instructions). |
View last login times of everyone on your system on Jan 15, 2004 at 17:42 UTC | by merlyn |
View the last login times of everyone on your system. You may need to adjust the struct for unpacking or the location of your lastlog file. |
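The technique is a fixed-size record read plus unpack; a hypothetical sketch assuming the common Linux lastlog layout (4-byte time, 32-byte line, 256-byte host), which is exactly the part the node says you may need to adjust:
use strict;
use warnings;
use POSIX 'strftime';
my $recsize = 4 + 32 + 256;                       # assumed record size; system dependent
open my $ll, '<', '/var/log/lastlog' or die "lastlog: $!";
while (my ($user, $uid) = (getpwent())[0, 2]) {
    seek $ll, $uid * $recsize, 0 or next;         # records are indexed by UID
    read $ll, my $rec, $recsize or next;
    my ($time, $line, $host) = unpack 'l A32 A256', $rec;
    printf "%-12s %s %s %s\n", $user,
        $time ? strftime('%Y-%m-%d %H:%M', localtime $time) : 'never',
        $line, $host;
}
endpwent();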
PDF Concatenation and Extraction Tool on Jan 14, 2004 at 21:59 UTC | by rob_au |
This is a PDF concatenation tool designed to merge PDF files or portions thereof together to a single output PDF file. The command line arguments for this tool take the form:
pdfcat.perl [input files ...] [options] [output file]
-
-i|--input [filename]
- Specify an input file for concatenation into the output file. If a single file is specified with the --page parameter, this script can also be used for extracting specific page ranges.
-o|--output [filename]
- Specify the output file for concatenated PDF output.
-p|--page|--pages
- This argument, which follows an input file argument, defines the pages to be extracted for concatenation from a given input file. If this argument is not defined, all pages from the input file are concatenated. The pages specified for extraction may be separated by commas or designated as ranges.
For example, the arguments --input input.pdf --pages 1,4-6 would result in pages 1, 4, 5 and 6 inclusively being extracted for concatenation.
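The page specification is easy to expand in Perl; a hypothetical helper that illustrates the "1,4-6" syntax described above:
sub expand_pages {
    my $spec = shift;
    my @pages;
    for my $part (split /,/, $spec) {
        if    ($part =~ /^(\d+)-(\d+)$/) { push @pages, $1 .. $2 }
        elsif ($part =~ /^\d+$/)         { push @pages, $part    }
        else  { die "bad page specification: $part\n" }
    }
    return @pages;
}
# expand_pages('1,4-6') returns (1, 4, 5, 6)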
|
Which version is this module? on Jan 14, 2004 at 16:19 UTC | by bart |
Sometimes you wonder what version of a module you have, and it's easy enough to come up with a one-liner to do it, loading the module and printing out the value in its $VERSION package variable. However, life can be somewhat easier still, so I wrapped this one-liner into a .bat batch file (for Windows/DOS). So copy this one line, paste it into a new text file "version.bat" (avoid adding newlines if you can), and save it somewhere in your path; I saved it in my perl/bin directory, next to the perl executable.
Use it from the "DOS" shell as:
version CGI
or
version DBD::mysql |
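The batch file above wraps a one-liner along these lines; shown here as a hypothetical standalone version.pl sketch rather than the .bat itself, to sidestep cmd.exe quoting:
use strict;
my $mod = shift or die "usage: version Module::Name\n";
(my $file = "$mod.pm") =~ s{::}{/}g;
require $file;                       # load the module from its .pm file
no strict 'refs';
my $v = ${"${mod}::VERSION"};        # symbolic lookup of the package variable
print defined $v ? "$mod $v\n" : "$mod has no \$VERSION\n";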
URL monitoring (on your LAN or on the internet via a proxy server) on Jan 06, 2004 at 12:34 UTC | by chimni |
This script checks whether a URL is available or not.
It uses Perl's LWP module.
The script has functionality for direct access as well as authenticated access through a proxy server.
The HTTP GET returns status 200 if everything is OK.
The timeout is set to 60 seconds; if the URL does not respond within that time, it is considered a failure.
Pardon the lack of POD; I used a general commenting style.
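The LWP part of such a check is roughly this (a hypothetical sketch; proxy URL, credentials and target URL are placeholders):
use strict;
use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new(timeout => 60);
# For authenticated proxy access, credentials can ride along in the proxy URL:
# $ua->proxy(['http', 'https'], 'http://user:password@proxy.example.com:8080/');
my $res = $ua->get('http://www.example.com/');
print $res->is_success ? 'OK: ' . $res->code . "\n"
                       : 'FAIL: ' . $res->status_line . "\n";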
|
Self Extracting Perl Archive - SIP v0.45 on Jan 01, 2004 at 12:49 UTC | by #include |
This is a complete rewrite of Self Extracting Perl Archive, turning it into a complete, usable utility named SIP (Sip Isn't Par).
The latest version will always be available from the SIP Website.
Takes a list of files and converts them into a Perl script that extracts the files when run, with the ability to display text or files, check file integrity with MD5 hashes, run commands before and after extraction, and create 'silent' extractors. SIP's main purpose is NOT to be an alternative to par, but to be the basis for a Perl software installation system. Uses File::Basename, Digest::MD5, File::Find, File::Path, and Getopt::Mixed.
Usage:
sip.pl OPTIONS file,file,...
Options:
-v,--version Print version and exit
-h,--help Print this text
-n,--nobanner Do not print banner in output script
-N,--name text Changes the name displayed in the banner
-s,--silent Output script executes silently
-p,--text text Print text at beginning of output script
-P,--print filename Prints the contents of a text file at beginning of output script
-m,--md5 Verify file integrity. Output script will require Digest::MD5
-f,--force Force extraction of damaged files
-o,--output filename Write output to file
-w,--overwrite Automatically overwrite existing files with output
-r,--run command Execute a command after extraction
-R,--pre command Execute a command before extraction
-d,--dir path Add all files in directory to script
-D,--recursive path Add all files in directory (recursive) to script. Directory structure is recreated on extraction.
-t,--temp Extract all files to temp directory
-l,--location directory Extract files to the specified directory
-b,--noshebang Do not add shebang to output
-B,--shebang text Adds a shebang other than #!/usr/bin/perl
UPDATE: Added the -r option, so you can use SEPA scripts as installers.
UPDATE: Added the -R and -f options, making using SEPA as an installer system more viable.
UPDATE: Added the -d and -D options, making it possible to add all files in a directory to a script
UPDATE: Added the -t option, allowing for temp directory extraction, and changed the name of the program to reflect on some comments :-)
UPDATE: Fixed a bug with the -r and -i options together
UPDATE: Fixed a bug with the -D option, and added the -e option
UPDATE: Fixed a whole slew of bugs and added the -l option. If the -D option is used, the directory structure is recreated upon extraction.
UPDATE: Even more bugfixes and options. Added the -b,-B,-w, and -N options, allowing you to remove the shebang from the output script, add a different shebang, automatically overwrite files on output, or change the name displayed in the banner, respectively.
UPDATE: More bugfixes and I added POD documentation.
|
filemod on Dec 31, 2003 at 20:10 UTC | by neuroball |
Small script that recursively searches the specified or current working directory for files with a modified timestamp in a specified date/time range.
Update: Detabbed source code.
|
Self-Extracting Perl Archive on Dec 29, 2003 at 00:40 UTC | by #include |
Takes a list of files and converts them into a Perl script that extracts the files when run. Uses File::Basename and Digest::MD5. The script is printed to STDOUT, and requires nothing but Perl and Digest::MD5.
SUPER MAJOR UPDATE: Removed the GUI, and added file integrity checking with MD5 hashes.
|
mvre - MoVe files based on given Regular Expressions on Nov 21, 2003 at 03:44 UTC | by parv |
UPDATE, Nov 24 2003: Completely updated the code as of version 0.41; fixes the check for existing files and adds a few new options/features.
This program requires Parv::Util, which I use quite often. The most current version of "mvre" is located at
http://www103.pair.com/parv/comp/src/perl/mvre.
Back to mvre; pod2html has this to say...
In default mode, mvre moves file located in the current
directory, matching (Perl) regex, to directory specified by
-out-dir. An output directory is required in all cases. See
-out-dir and -out-find-path options.
Files to be moved, also referred as ``input files'', are specified either
via -in-select option and/or as argument(s) given
after all the options.
Order of input selection regex, -in-select or -select
option, and output directory specified is important. One-to-one relation
exists between each regex to select input files and the output directory,
either explicitly specified or found in path specified via
-out-file-path option.
|
onchange - a script to run a command when a file changes on Nov 20, 2003 at 20:19 UTC | by samtregar |
I like to write articles in POD and preview the results of running pod2html in my web browser. However, this requires me to run a command in a shell every time I make an edit. Even if it's just 'make', I still can't be bothered.
So I wrote this script to run a command every time a file changes. It requires Time::HiRes and Getopt::Long. Read the POD for more information.
UPDATE: I added code to do a recursive walk when onchange is passed a directory. This enables onchange to watch a whole directory tree. It won't notice new files added though... (Feb 24th 2005) |
read/write tester on Nov 19, 2003 at 12:20 UTC | by bronto |
There were problems with an Oracle database writing data over NFS here. A script was needed to do some random read/write parallel activity and check where things were going wrong.
If you call this script as
./dbcreate.pl 15 8k 90 1m 100m test.txt
it will spawn 15 children, read/write random data in 8kb blocks, doing 90% of input and 10% of output; it will initialize the test file to 1Mb size, will read/write 100Mb of data; the test file will be test.txt
I hope you'll find it useful
--bronto |
rename 0.3 - now with two extra cupholders on Nov 01, 2003 at 15:43 UTC | by Aristotle |
Update: originally forgot to include shell metacharacter cleansing in --sanitize.
This is a much improved version of the script I posted at rename 0.2 - an improved version of the script which comes with Perl, which in turn is an evolution of the script that comes with Perl. (PodMaster informs me that it's only included in ActivePerl. I haven't looked.)
It now does everything I could personally ask for in such a script; thanks particularly to graff and sauoq for feedback and food for thought.
I also stole a few options from Peder Strey's rename on CPAN. That one has additional options for finely grained control over keeping backups of the files under their old names; personally, I don't see the merit. If you do, please let me know. In either case, even if anyone thinks such facilities would be good to have, I feel they should be provided by a more general mechanism. After all, this is a script you can pass Perl code to; while there's good reason to optimize for the common case, I feel it is better to leave the specialised cases to the expressive prowess of Perl rather than try to invent a narrowly defined interface for them.
Blue-sky stuff: just yesterday I also decided this is basically a perfect vehicle to build a batch MP3 processor onto. Now I plan to eventually add facilities for querying as well as manipulating the ID3 tags in MP3 files alongside their filenames. Given a cleanly integrated interface, this script would naturally lend itself to that task and become an MP3 renamer to end all MP3 renamers - without even focussing on that task. Of course all MP3 processing stuff would be optional and its main purpose would still be plain old renaming of files.
Anyway, without further ado, have at it. Please give this a thorough whirl and let me know of any kinks. |
DBI SQL Query tool on Oct 27, 2003 at 21:15 UTC | by runrig |
I use this to execute SQL from Vim. Written because all the query tools available to me on the PC suck, so it somewhat emulates a query tool on unix that I like, which displays the selected fields across the page in columns if they will fit on the screen, but displays them vertically down the screen if they won't fit. I also display database column types, because I often see column names such as 'item_no', and there are often mostly just numbers in the column, but I'd like to know if it's really a character column.
In this script, I assume all connections are through ODBC (easy enough to change that though), and if you are wondering what all the logic is about with the dsn and dbname, it is because in my version, I do a lot of convoluted mapping of database names to user/passwords and where to find dsn-less connection strings. Creating a dsn-less connection is easy: you just create a file DSN for the type of database you need, then use the contents of that file as a template for the dsn variable, substituting the desired database name if needed. This uses code from WxPerl Login Dialog, but with my default db/user/password/dsn mappings, I rarely call that module.
This behaves a bit odd on SQL Server databases, for instance, it thinks update statements are really select statements, and a couple of rows are fetched (and returned!) from the database. I'm not sure if anything can be done about this short of scanning the sql statement beforehand, but I get amused every time it happens, so I leave it as is :-)...update: actually, it only seems to be on certain update statements in one particular database...weird
Enjoy.
Updated 2004-03-30 |
rename 0.2 - an improved version of the script which comes with Perl on Oct 25, 2003 at 22:14 UTC | by Aristotle |
Update: obsolete, please check rename 0.3 - now with two extra cupholders instead.
You probably know the script that comes with Perl. Initially, I started hacking on it because I didn't want to pull out the old rename binary for very simple substitutions, but found it too cumbersome to write a Perl s/// for the same job. Then, feeping creaturism set in and I started adding more and more little stuff.. eventually, it grew to something I wouldn't want to miss from life on the command line. |
cpandiff - diff local source against CPAN on Oct 24, 2003 at 05:47 UTC | by diotalevi |
This compares a local module distribution against the current CPAN version and produces a unified, recursive diff. |
Idealized optrees from B::Concise on Oct 21, 2003 at 21:50 UTC | by diotalevi |
This alters the output of B::Concise so you can view an idealized optree. It removes all the execution order, context and null nodes until the output is nicely readable.
perl -MO=Concise -e '...' | ./idealized_ops
That transforms something like this:
l <@> leavet1 vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 2 -e:1) v ->3
k <2> leaveloop vK/2 ->l
7 <{> enteriter(next->g last->k redo->8) lKS ->i
- <0> ex-pushmark s ->3
- <1> ex-list lK ->6
3 <0> pushmark s ->4
4 <$> const(IV 1) s ->5
5 <$> const(IV 100) s ->6
6 <$> gv(*_) s ->7
- <1> null vK/1 ->k
j <|> and(other->8) vK/1 ->k
i <0> iter s ->j
- <@> lineseq vK ->-
8 <;> nextstate(main 1 -e:1) v ->9
- <1> null vK/1 ->g
c <|> and(other->d) vK/1 ->g
b <2> lt sK/2 ->c
- <1> ex-rv2sv sK/1 ->a
9 <$> gvsv(*_) s ->a
a <$> const(IV 50) s ->b
f <@> print vK ->g
d <0> pushmark s ->e
- <1> ex-rv2sv sK/1 ->f
e <$> gvsv(*_) s ->f
g <0> unstack v ->h
h <;> nextstate(main 2 -e:1) v ->i
Into this:
leave
enter
nextstate
leaveloop
enteriter
pushmark
const
const
gv
null
and
iter
lineseq
nextstate
null
and
lt
gvsv
const
print
pushmark
gvsv
unstack
nextstate |
Snapshot.pm on Oct 19, 2003 at 11:53 UTC | by robartes |
This module implements a way of taking directory structure snapshots using the rsync/hardlink method from Hack #74 in Linux Server Hacks. It's fairly basic for the moment, and limited to Unix platforms. Future versions will become more universal through the use of link and File::Find. |
pqset on Sep 30, 2003 at 15:14 UTC | by neilwatson |
pqset enables you to check and set user disk quotas on a linux system.
03/10/02 updated code with cleaner option/switches. |
Deleting a Subtree from LDAP on Sep 17, 2003 at 20:39 UTC | by mayaTheCat |
Hi monks,
The following code recursively deletes a subtree from LDAP.
Since only leaf nodes can be deleted from LDAP, the code first traverses the subtree and then deletes the nodes in the reverse order in which they were traversed.
If the port is other than the default (389), it can be appended to the server string, delimited with a ':';
e.g. if the server is ldapserver and the port is 889, then the following string works: 'ldapserver:889'.
If we do not want to stress the server, we can periodically
pause the deletion for a while through the parameters $sleepPeriod and $sleepDuration.
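The traverse-then-delete-deepest-first idea looks roughly like this hypothetical Net::LDAP sketch (server, port, bind DN, password and base DN are placeholders):
use strict;
use warnings;
use Net::LDAP;
my $ldap = Net::LDAP->new('ldapserver', port => 889) or die "$@";
$ldap->bind('cn=admin,dc=example,dc=com', password => 'secret');
my $mesg = $ldap->search(base   => 'ou=todelete,dc=example,dc=com',
                         scope  => 'sub',
                         filter => '(objectClass=*)');
# Deepest DNs first, so only leaf entries are ever deleted:
my @dns = sort { ($b =~ tr/,//) <=> ($a =~ tr/,//) } map { $_->dn } $mesg->entries;
for my $dn (@dns) {
    $ldap->delete($dn);
    # sleep for $sleepDuration every $sleepPeriod deletions if the server needs a breather
}
$ldap->unbind;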
|
css: Counter Strike Scanner on Aug 10, 2003 at 03:09 UTC | by skyknight |
I enjoy a good game of Counter Strike just as much as the next nerd, but I hate the interfaces for getting into a game. There are a small number of servers on which I like to play, and if there isn't a good game on any of them, I'd rather not waste my time at all. As such, I want a quick way to know if there are some good games available. Half-Life itself is too heavyweight in its startup. GameSpy is also slow to start up, and furthermore extremely annoying as you are forced to watch all kinds of crappy advertising. What I wanted was a simple command line script that quickly apprises me of what's up on my favorite servers, and so I wrote this. You can simply specify a server address and port on the command line, or use a -f switch and a name for a file that contains a bunch of lines, on each of which is a server address and port, separated by one or more spaces. Thus usage is very straightforward. The output is nicely formatted as well. As one minor tweak, server names have spaces translated to underscores, so as to allow for distinct space-separated fields for passing off to other command line scripts for further processing, say, if you wanted to sort based on ping time. Enjoy! Oh yeah... Half-Life doesn't use a nice text-based protocol. You get to wrangle with messy pack and unpack statements. Yum. |
pmdesc2 - lists modules with description on Aug 05, 2003 at 22:23 UTC | by Aristotle |
I recently looked at Tom Christiansen's scripts. One of them, called pmdesc, lists any or a subset of the modules you have installed, complete with the version number and a description. Handy! Unfortunately, it has several annoying traits. First of all, it's slow. Which one might live with, except it also picks up modules relative to wrong directories, so Foo::Bar might be reported, f.ex, as i686-linux::Foo::Bar.
No problem, I thought, I'll just hack it. Unfortunately, the source is, well, less than tidy, with several unnecessary-to-be globals and badly distributed responsibilities. For so little code, it is surprisingly confusing to follow.
So what's a hacker to do, eh? Here's a clean version.
I fixed the directory problem by visiting the longest paths first, which ensures we see any subdirectories prior to their ancestors while traversing the trees.
Speed was addressed by using an ExtUtils::MakeMaker utility function. While this imposes restrictions on the $VERSION assignments this script can cope with, CPAN uses the same function, so anything from CPAN is likely to comply anyway. Compared to the old code which had to actually compile each module, this is orders of magnitude faster. |
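For reference, the MakeMaker helper in question can be used on its own; a hypothetical sketch (it evaluates just the $VERSION assignment instead of compiling the whole module):
use strict;
use warnings;
use ExtUtils::MakeMaker;
my $file = shift or die "usage: $0 path/to/Module.pm\n";
print MM->parse_version($file), "\n";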
Let the staff back in! on Aug 01, 2003 at 09:45 UTC | by Intrepid |
rectify-perms-perl-sitedirs
A *NIX Perl installation administration / maintenance utility.
This script is a specific instance of the "recurse and fix permissions / file ownership" type of script that has been written (doubtless) many times over by *nix system admins around the globe. As such it isn't very special, but it might still be particularly welcomed by newbies and those not used to quickly writing scripts for such admin tasks.
Please do not attempt to read this explanation if you already have started to
get a headache or tend to get them. It will most likely make it worse. ;-)
|
vimDebug on Jul 26, 2003 at 18:45 UTC | by toiletmonster |
An integrated debugger for Vim. Now you can step through your Perl code inside Vim. Also supports jdb (Java), gdb, and pdb (Python).
get the latest version at http://vim.sf.net/scripts/script.php?script_id=663 or
http://iijo.org
comments, bugs, suggestions, please!!
|
(code) Command Antivirus Update Fetch+Extract on Jul 23, 2003 at 14:02 UTC | by ybiC |
Checks the Command Antivirus website for an updated virus definition file, and fetches it if newer than the local copy. Then inflates the self-extracting executable to an 'ms patch' file named for its release date. This msp file is copied to latest.msp, which can be run (outside this program) to update client PCs.
From a perlish perspective, building this has been a great refresher, especially on minimizing the number of global variables, as I've been Away From Perl for months. It's good to be back, even for code as straightforward as this. 8^)
Sample run:
http://download.commandcom.com/CSAV/deffiles/DEFMSP.EXE: 200 OK
PKSFX(R) Version 2.50 FAST! Self Extract Utility for Windows 95/NT 4-15-1998
Extracting files from .ZIP: c:/DEFMSP.EXE
Inflating: 030717.msp
1 valid msp file(s):
030717.msp
Latest msp: 030717.msp
Today: 030723
030717.msp copied to current.msp
Done... <enter> to exit
|
uniq_exec.pl - filters duplicate program output on Jul 18, 2003 at 20:12 UTC | by diotalevi |
This is sort of like uniq for programs - the script only prints your program's output when it is different from the last time. I wrote this so I could put `whois -i somedomain.org` in a cron job and only get notices when Network Solutions' database changes. You could use this with lynx or wget to see when a web page changes, etc. I assume this already exists as some common unix tool, so please let me know which one that is; I only wrote this because I didn't know what the tool was called. |
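The same effect fits in a hypothetical handful of lines: run the command, compare its output against a state file, and only speak up when something changed (a single fixed state file is a simplification of the real script):
use strict;
use warnings;
die "usage: $0 command [args]\n" unless @ARGV;
my $state = "$ENV{HOME}/.uniq_exec_state";
my $new   = join '', `@ARGV`;             # run the command given on the command line
my $old   = '';
if (open my $in, '<', $state) {
    local $/;
    $old = <$in>;
    $old = '' unless defined $old;
}
if ($new ne $old) {
    print $new;
    open my $out, '>', $state or die "$state: $!";
    print $out $new;
}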
Yet another personal backup script on Jul 10, 2003 at 12:55 UTC | by EdwardG |
Yet another personal backup script. |
Arquivo de Log on Jul 08, 2003 at 18:48 UTC | by Mago |
For current or future Portuguese-speaking members.
These functions make it easy to put together a Log or Trace file.
Some changes may be necessary to meet the needs of your program.
You must always open the log file.
With the log file open, you can use the message-sending and error-code calls to control the file.
Before finishing the program, you must always pass a return code so that the file is closed correctly.
PS: Remember that this code is a reference to make program development easier.
|
Perl script commenting assistant on Jul 06, 2003 at 20:13 UTC | by Wafel |
Seeks out interesting tidbits of code and interactively assists in commenting them. The code is ugly, though it might come in handy for commenting those old scripts that you've got lying around. |
Value Restoration on Jun 27, 2003 at 13:13 UTC | by Lhamo_rin |
Thanks to the help I received from my fellow monks I was able to finish this script. I use it to restore certain values after a power failure that would, by default, be reset to 0. The user is prompted with the option of restoring the old values, overwriting with new, or canceling altogether. The read_file is just variable names and values separated by whitespace. I thought someone might be able to use it. |
SXW Writer on Jun 13, 2003 at 14:56 UTC | by benn |
I needed something to write out Open Office .sxw files, and was surprised to find nothing on CPAN, so I spent a couple of days knocking this up. (Typical huh? Just doing it in Open Office would take me 5 minutes :)).
It's very raw, but it works, and may serve somebody as a starting off point for something else. All the documentation is in POD at the bottom. Enjoy.
Cheers,Ben.
Update: added add_font example to synopsis |
Create/Extract Zip Archives on Jun 03, 2003 at 04:22 UTC | by #include |
Using this script, you can create and extract zip archives. Not only is this script functional as a utility, it also demonstrates usage of the Archive::Zip module. |
peek - output a one-line preview of stdin on Apr 27, 2003 at 17:36 UTC | by halley |
Some commands give visual feedback that is either feast or flood. They either remain blissfully silent while they work on a long task, or they blather endlessly about every step of their progress.
Many times, it'd be handy to have feedback somewhere in between. You can still see it's running, but it doesn't scroll your terminal out of sight.
This is essentially a Perl one-liner, written to a script. It's trivial. It's not about how tough the task is, but whether you find it useful.
Update: printf and --keep suggestions implemented |
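Essentially this, in a hypothetical rendering (the real script adds printf-style formatting and the --keep option):
use strict;
use warnings;
$| = 1;
while (<STDIN>) {
    chomp;
    printf "\r%-79.79s", $_;   # overwrite the same line, clipped to a sane width
}
print "\n";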
Find Scripts and Make Executable on Apr 22, 2003 at 04:16 UTC | by The Mad Hatter |
The other day I had a large number of Perl and shell scripts that were not marked executable. They needed to be, but there was a great deal of them scattered all over the place (a source tree directory, to be precise) and I wasn't about to find and chmod them myself. This is the result.
It will check the first line of a file to see if it starts with a shebang (#!), and if it does, will make it executable for the user who owns it. I shell out to the system chmod because it was simpler and quicker to use the symbolic permissions notation (u+x) instead of stat-ing the file, adding the correct value to the octal permissions, and then using Perl's chmod with that modified value.
File names to check are specified as arguments. I used the script in combination with find: find . -type f -exec isscript \{\} \; |
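A hypothetical boiled-down version of the idea:
use strict;
use warnings;
for my $file (@ARGV) {
    open my $fh, '<', $file or next;
    my $first = <$fh>;
    close $fh;
    next unless defined $first && $first =~ /^#!/;   # only files that start with a shebang
    system 'chmod', 'u+x', $file;                    # symbolic mode, as described above
}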
New switches for perl(1) on Apr 21, 2003 at 13:06 UTC | by Aristotle |
If, like me, the vast majority of oneliners you write are -n or -p ones, you'll probably have cursed at the verbosity and unwieldiness of the -e'BEGIN { $foo } s/bar/baz; END { $quux }' construct.
Hey, I thought, I can do better than that.
So I ripped apart the Getopt::Std code and based this script on it, which adds two options to Perl:
- -B
- This works just like -e, except it also wraps the code in a BEGIN block.
- -E
- This also works like -e, except it wraps the code in an END block.
Enjoy.
Update: changed hardcoded location of Perl binary to $^X in last line as per bart's suggestion. |
unhead and untail on Apr 11, 2003 at 22:14 UTC | by halley |
The historically common head(1) and tail(1) commands are for keeping the head or tail of stream input, usually by a count of lines. This pair of scripts differ in three respects:
- these scripts don't show the head or tail, they show everything except the head or tail,
- these scripts don't count lines but instead work on a single regex (regular expression) to find a matching "cut here" point in the text,
- these scripts can actually modify/edit text files in place if filenames are given.
Check out the pod output for a complete man-page.
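For instance, the "cut at the first regex match" behaviour can be approximated by hypothetical one-liners like these (pattern and file are placeholders; whether the matching line itself is kept, plus in-place editing, is what the real scripts take care of):
perl -ne 'print if $cut ||= /^__END__$/' somefile    # unhead-ish: drop everything before the match
perl -pe 'exit if /^__END__$/' somefile              # untail-ish: drop everything from the match on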
|
Anonymous User Add For Linux Shell on Mar 29, 2003 at 03:08 UTC | by lacertus |
I am 'presiding madman' of a Chicago-based LUG, and I thought it might be apropos to allow users who've no experience with a *nix shell to be able to create their own account on the public webserver. Essentially, I have created a password-protected 'newuser' account, whose information I give out upon a member's registration so they can log in. The password file has no 'shell' per se for this 'newuser' account; rather, this script is the shell. Of course, you must be quite careful with something like this, and while I have addressed all the security issues that come to mind, I'm sure this script isn't vulnerability free. Feel free to contact me with questions/suggestions/patches (if yer real cool ;)
The script allows a newuser to create a username and assign a password of their choice; what's more, it logs all newusers, emails, etc, and also emails all this info to the administrator, so you can keep apprised of what's going down. Enjoy!
Ciao for Now,
Lacertus |
[vt.ban] simple bannerscanner on Mar 18, 2003 at 10:25 UTC | by photon |
This is a little bannerscanner which can send specific strings to different ports and dump the output.
Very ugly code. You can use it, for example, to check the versions of different network services. |
Ascii Chart on Feb 24, 2003 at 23:04 UTC | by runrig |
Display an ascii chart similar to the ascii man page, in a variety of formats. |
nessus port reporter on Jan 25, 2003 at 01:27 UTC | by semio |
I've had the requirement lately to work with a large amount of security data in the NessusWX export file (.enx) format. This script will associate all ports with their respective IP in an ordered, grepable format. Hopefully someone will find it to be useful. Comments are very welcome. |
podwatch: POD previewing tool on Jan 17, 2003 at 23:21 UTC | by adrianh |
When I started writing POD documents I got bored with previewing things with perldoc. Every time I made an edit I had to start up perldoc again, find where I had made the change, only to discover it wasn't quite right. Repeat until done.
To make this a little easier I threw together podwatch. A (very basic) POD viewer that tracks changes to the source file. When the source changes podwatch reads it in again, and moves to the place where the output changed.
Now I edit POD in one window, with podwatch running in another. Instant feedback whenever I hit save.
Hope you find it useful. |
Splitting up XChat log files on Jan 12, 2003 at 06:54 UTC | by Paladin |
Script to split up XChat log files based on the time and date of the conversation. |
Url2Link 0.1 GUI/TK on Jan 12, 2003 at 05:12 UTC | by m_dv |
Simple Perl/Tk program that reads input lines from a textbox that have a URL at the beginning and converts the URLs into actual link files (with the name of the URL's domain). I did it because I had a file with a bunch of URLs that I wanted to put in my "Favorite" folder. Any comments are welcome. |
change/limit already existing file names to [-_.0-9a-zA-Z]+ on Jan 09, 2003 at 09:19 UTC | by parv |
sanefilename.perl changes characters in file names which are not composed of '[-_.a-zA-Z0-9]' characters, and...
- all the characters not matching '[-_.a-zA-Z0-9]' are converted to '-'.
- '-_' or '_-' sequence is changed to single '-'.
- any sequence of '.-', '._', '-.', '_.' is changed to single '.'.
- multiple occurrences of [-_] are changed to one.
In case of surprise(s), refer to the source code. It is also available from...
http://www103.pair.com/parv/comp/src/perl/sanename.perl
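The substitution rules above map onto a hypothetical chain like this:
sub sanename {
    my $name = shift;
    $name =~ s/[^-_.a-zA-Z0-9]/-/g;    # anything else becomes '-'
    $name =~ s/-_|_-/-/g;              # '-_' or '_-'  ->  '-'
    $name =~ s/[.][-_]|[-_][.]/./g;    # '.-' '._' '-.' '_.'  ->  '.'
    $name =~ s/([-_])[-_]+/$1/g;       # runs of '-' and '_'  ->  a single character
    return $name;
}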
|
template on Jan 08, 2003 at 22:09 UTC | by JaWi |
This little script allows you to create skeleton files based on simple templates. It provides a simple form of variable expansion which can be used to automatically fill in certain parts of your (future) code.
For example, you can create a template to automatically generate C++ or Java classes. |
Ensure Zip files always unpack to a single subdirectory on Jan 07, 2003 at 08:28 UTC | by bbfu |
hardburn requested a utility that would ensure that Zip files did not "explode", or unpack files into the current directory, by extracting into a subdirectory if the Zip file was not already set up to do so. |
Interlaced duplicate file finder on Jan 06, 2003 at 21:18 UTC | by abell |
A script to find and remove duplicate files in one or more directories. It serves the same purpose as salvadors's module (see also File::Find::Duplicates), but it's more efficient when discriminating different files of the same size.*
The program gets a speed-up by reducing file reads to a minimum. In most cases, it only reads small chunks from unique files and only files with duplicates are read completely. Thus, it is particularly fit for big collections of audio files or images (shameless advertisement ;).*
* - added and revised explanation, inspired by merlyn's comment. |
breathe - beyond 'nice' on Jan 02, 2003 at 17:27 UTC | by tye |
Run code in the background even more nicely than is possible with 'nice'.
We have administrative tasks that run on the PerlMonks database server regularly. These make the site quite sluggish even if run via "nice -99". So I wrote this.
It runs a command nice'd but every few seconds it suspends the command for several seconds.
Updated.
|
Create Daily Backups of Scripts and Datafiles Inside a Web Site on Dec 29, 2002 at 08:14 UTC | by jpfarmer |
I work on a website with several programmers, all with vastly different styles of writing/testing code. Every so often, a script or data file will get clobbered and the most recent total-site backup may be several weeks old at best (mainly because the whole site backup is so large).
In response to that problem, I wrote this script to back up the files that would normally get changed during development. I run it nightly via a cron job.
This is my first code submission, so please give me any feedback you may have. |
(code) Cross-platform unlink all but $n newest $filespec in $dir on Nov 26, 2002 at 20:52 UTC | by ybiC |
Delete all but "n" newest files of given filespec from specified directory. Accepts filesystem wildcards like * and ? as filespec arguments. The code line that actually unlinks files is commented out - uncomment once you're comfortable with how options and arguments operate. Tested with Perl 5.6.1 on Debian 3, Win2kPro, WinNT plus Perl 5.8.0 on Cygwin.
It's entirely possible that this might be done in fewer LOC using File::Find. Nonetheless, has been a good exercise/refresher for /me on stat, sort, cmp, regexen, and glob.
Thanks to the following monks for direction, clues, and answers to brain-mushing questions: Petruchio, jkahn, Undermine, Zaxo, theorbtwo, fever, BrowserUk, tye, belg4mit, PodMaster, and Mr. Muskrat. And to some guy named vroom.
Update: see pod UPDATES
|
Remove eMpTy Directories on Nov 18, 2002 at 16:45 UTC | by tye |
Traverses a directory tree deleting any directories that are empty or only contain subdirectories that get deleted. Updated. |
oldfiles on Nov 14, 2002 at 19:18 UTC | by neilwatson |
Recover your disk space!
oldfiles searches a directory and sub directories for files of a certain age and size. A report is emailed to the owners of those files asking them to remove or archive them.
This is a first draft. Everyone is encouraged to make comments and suggestions.
Thank you. |
resub on Nov 08, 2002 at 09:02 UTC | by graff |
Do any number of global regex substitutions uniformly over
any number of text files, and correctly handle all character
encodings supported by Perl 5.8.0, with optional conversion
of data from one encoding to another. (update: fixed checks
for valid regexes) |
force.pl on Oct 26, 2002 at 21:48 UTC | by ixo111 |
Perl translation of the old classic 'force.c', allowing you to force input to a terminal device. Even though it saw a lot of illegitimate use in the past, I've found quite a few legitimate uses for it over the years. The ioctl/fcntl defs are hard-coded at the top and may need to be changed for other systems (consult your ioctl.ph, ioctls.ph and fcntl.ph for the proper values - consult h2ph if you do not have these files in your perl tree)
|
robustly list any Perl code's module dependencies on Oct 06, 2002 at 11:39 UTC | by Aristotle |
I just read hans_moleman's script and thought there has to be a more robust way to do this that doesn't rely on parsing sources and has better support for recursive dependencies than just reporting whether a toplevel dependency is satisfied. This module is the surprisingly simple result. It relies on the fact that you can put coderefs in @INC.
Update: podmaster points out that not all pragmas are available everywhere, warnings being an obvious example, so they constitute a dependency, too. I don't want to change the behaviour of this module, however, so I added this to the CAVEATS section in the POD. |
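The @INC trick looks roughly like this hypothetical sketch: install a coderef hook that records every file require'd while the script under test is being loaded, then decline so the real file still gets pulled in from the rest of @INC:
use strict;
use warnings;
my %required;
unshift @INC, sub {
    my (undef, $filename) = @_;    # e.g. 'File/Spec.pm'
    $required{$filename}++;
    return;                        # decline; the remaining @INC entries load it for real
};
my $script = shift or die "usage: $0 some_script.pl\n";
do $script;                        # compiling the script fires its use/require through the hook
print "$_\n" for sort keys %required;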
Check a script's module dependencies on Oct 06, 2002 at 03:09 UTC | by hans_moleman |
I put together this code to help migrating Perl scripts from one environment to another. The scripts are mostly CGIs, and I found it annoying and time consuming to either run the scripts to see what breaks, or read through each script manually. Using this little script you can check whether all needed modules are available.
Comments and suggestions are appreciated as always...
Update: Changed the logic in my if statement to reflect podmaster's CB suggestion... |
Tk Quick Benchmark Tool on Sep 22, 2002 at 04:54 UTC | by hiseldl |
This script is a user interface for comparing short snippets of code and was inspired by BrowserUk, bronto, and Aristotle at this node. There are several other nodes that have discussions about performance and using the Benchmark module, this is just the last one I read that made me want to write this code.
There are 2 buttons, 4 text widgets in which to enter text, 1 text widget to show the output, and an adjuster.
- Clear Button - clears all the text boxes and resets the count to 1000.
- Run Button - runs the tests using cmpthese from the Benchmark module. The output text widget will be cleared before the test is run, you can turn this off by commenting out the following line in the OnRun method:
$tk{output_text}->delete(0.1, 'end');
- count - this is the first argument to cmpthese.
- test1 - this is the first snippet to be tested. An example:
mapgen => 'my @ones = mapgen 1, 1000;'
- test2 - this is the second snippet to be tested. An example:
xgen => 'my @ones = xgen 1, 1000;'
- code - this is where any supporting code should be typed; this field is not required. An example:
sub mapgen { return map $_[0], (1..$_[1]); }
sub xgen { return ($_[0]) x $_[1]; }
- output - this is where the output from cmpthese will appear as well as the code that was eval'd. An example:
COUNT=1000
TEST CODE:
{mapgen => 'my @ones = mapgen 1, 1000;',
xgen => 'my @ones = xgen 1, 1000;',}
SUPPORT CODE:
sub mapgen { return map $_[0], (1..$_[1]); }
sub xgen { return ($_[0]) x $_[1]; }
RESULTS:
Benchmark: timing 1000 iterations of mapgen, xgen...
mapgen: 2 wallclock secs ( 2.14 usr + 0.00 sys = 2.14 CPU) @ 466.64/s (n=1000)
xgen: 1 wallclock secs ( 1.16 usr + 0.01 sys = 1.17 CPU) @ 853.24/s (n=1000)
Rate mapgen xgen
mapgen 467/s -- -45%
xgen 853/s 83% --
Happy Benchmarking!
-- hiseldl "Act better than you feel"
|
Storable 2 Text - An editor for data files created by Storable.pm on Aug 08, 2002 at 13:57 UTC | by kingman |
Opens a file created by Storable::lock_store and dumps the data structure to an ascii file for viewing/editing.
Also creates a file via Storable::lock_store that contains an empty hash if you create a symlink to the script called ts (touch store). |
Mirror only the installable parts of CPAN on Aug 08, 2002 at 06:10 UTC | by merlyn |
As noted in a parallel thread, I have this short program which can mirror a complete set of the installable modules for use with CPAN.pm.
This is for review purposes only. A final version of this code will appear in my LM column. Comments are welcome.
WARNING: |
As stated, this was a preliminary version of this program for comment only. While writing
the column, I fixed a few bugs.
Do not use the version here. Use the version there instead.
|
|
diffsquid - find the differences in Squid configuration files on Aug 07, 2002 at 15:06 UTC | by grinder |
Analyse two squid configuration files, and report parameters that are
present in one file but not in the other, or have different values.
Also attempt to identify valid parameter names in the comments and
report on those as well (useful when new versions are released). |
rcmd.pl on Jul 25, 2002 at 12:26 UTC | by greenFox |
rcmd.pl -a utility for running the same command across a group of hosts.
|
dgrep - Wrapper around gnu find & grep on Jul 22, 2002 at 20:10 UTC | by domm |
I just cannot remember how to run find and grep together. After reading the FM once too often, I wrote this small wrapper..
Pass it a pseudo-regex (to match the files) and another one to look for in all files.
Example:
% dgrep .pm foo
Will look for "foo" in all files ending in ".pm" in the current and lower directories.
Edited:
~Tue Jul 23 15:24:49 2002 (GMT),
by footpad: Added <code> tags to the code.
|
win2unix on Jul 20, 2002 at 08:03 UTC | by ackohno |
This is a little script I came up with to get those ^M's out of files that come in the downloaded source here at PerlMonks. Given one argument (a file name), the script removes the ^M's from that file; given two, the first is input and the second is output. If the if statement matching for the perl shebang is removed, this script can be used to remove the ^M's from any file. Without that if statement, there may be a newline at the beginning of the file which will cause the script not to run.
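The core of it is just stripping the carriage returns; a hypothetical in-place one-liner version for a unix-ish shell (filename is a placeholder):
perl -i.bak -pe 's/\r$//' downloaded_script.pl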
|
CheckPoint rule auditor on Jul 12, 2002 at 04:08 UTC | by semio |
This script was designed to help me gain insight into rule utilization on the Check Point Firewalls I maintain e.g. rules most heavily used or, conversely, rules not being used at all. Its input is any semi-colon delimited file created using logexport on the Firewall. Works on 4.1 and NG |
backfiles on Jun 27, 2002 at 03:23 UTC | by xiphias |
Backs up files using tar into dirs in backlist.txt.
Quite simple |
MODULATOR on Jun 21, 2002 at 01:39 UTC | by epoptai |
Browse pod and code of installed perl modules in a handy frameset. Lists each installed perl module linked to an HTML rendering of its pod if any, and to its source code. Option to automatically put synopsis code into a form for easy testing via eval (this is both powerful and dangerous, use caution). Lists environment variables and result of various path and url finding methods. Here's a screenshot.
Updates:
fixed problem with "refresh cache" not refreshing the cache.
added "no header" option to code eval, for testing output of modules like GD.
implemented this fix suggested by perigeeV.
added a link to the perl module list.
added function to list module source code with numbered lines.
Added a CPAN search form. |
nnml2mbox on Jun 11, 2002 at 15:45 UTC | by mikeirw |
I needed to view the contents of a nnml mail directory, but didn't have
access to Emacs or Gnus, so I whipped up this simple script to allow me to
use mail -f instead. I must say that I'm a Perl newbie, so it may need some
work. If so, I'll appreciate any comments.
A quick note: I did not include any code to match Gcc'ed emails (which
doesn't generate a From header), so you may need to add that before
running. |
Statistical data analysis on Jun 05, 2002 at 12:56 UTC | by moxliukas |
This short program outputs some statistical analysis data given input data in two tab-separated columns, the first being the X column and the second the Y column. It calculates means, quartiles, median, variance and standard deviation for both sets of data. It also outputs various summations (X, X^2, Y, Y^2 and X*Y). It then calculates covariance, the linear correlation coefficient and the coefficient of determination, and finally comes up with the linear regression equation.
Most of this is simple and straightforward maths and I do hope it will prove useful to someone (well, I have used this script for my statistics lectures). |
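For reference, the regression and correlation parts reduce to the usual sums; a hypothetical sketch for tab-separated X/Y pairs on STDIN (assumes at least two distinct X values):
use strict;
use warnings;
my ($n, $sx, $sy, $sxx, $syy, $sxy) = (0) x 6;
while (<STDIN>) {
    my ($x, $y) = split /\t/;
    $n++; $sx += $x; $sy += $y;
    $sxx += $x * $x; $syy += $y * $y; $sxy += $x * $y;
}
my $slope     = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);
my $intercept = ($sy - $slope * $sx) / $n;
my $r         = ($n * $sxy - $sx * $sy)
              / sqrt(($n * $sxx - $sx ** 2) * ($n * $syy - $sy ** 2));
printf "y = %.4f x + %.4f   (r = %.4f, r^2 = %.4f)\n", $slope, $intercept, $r, $r ** 2;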
Fake daemon on May 30, 2002 at 02:31 UTC | by hagus |
A script I dug out of my archives. I submit it here in the hope that someone might find some useful sample techniques, despite its hurried appearance. I wrote it awhile ago with the following goals:
To make a non-daemon process run as if it were a daemon (ie. give it a controlling terminal).
To collect the stderr and stdout streams from that process uninterleaved (is that a word?).
To restart the process at a particular time each day.
To restart the process should it die unexpectedly.
Things needing fixing that I can see:
Signal handling is below par. I don't understand it very well, as I seldom have to handle signals in perl.
Restart time is hardcoded - it really should take either a maximum run-time argument, or a date string which is parsed.
Command line arguments, anyone?
Handling infinite loops when restarting the process. Ie. if restart occurs more than x times in y seconds, sleep for z or exit.
Other stylistic or design problems people might see?
|
MySQL backup on May 24, 2002 at 23:25 UTC | by penguinfuz |
This script backs up each MySQL database into individual gzip'd files; Useful in shared environments where many users have their own MySQL databases and wish to have daily backups of their own data.
UPDATE: 17/07/2003
- Now using bzip2 for better compression
- Removed connect() subroutine
TODO
- Read db owners from a config file and automatically deliver
backups to the appropriate ~user dir.
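The per-database loop is essentially this hypothetical sketch (credentials and backup directory are placeholders):
use strict;
use warnings;
my ($user, $pass, $dir) = ('backup', 'secret', '/var/backups/mysql');
chomp(my @dbs = `mysql -u$user -p$pass -N -e 'SHOW DATABASES'`);
for my $db (@dbs) {
    system("mysqldump -u$user -p$pass $db | bzip2 -c > $dir/$db.sql.bz2") == 0
        or warn "backup of $db failed\n";
}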
|
Service Health Scanner on May 19, 2002 at 11:23 UTC | by penguinfuz |
Basically the script first tries to resolve the domain name; if it fails you get an email notification telling you which domain could not be resolved. If the domain resolves OK, the domain is then scanned for whatever service is specified; if this fails you receive a notification of which domain is having trouble.
The subject line of the email notifications have been tailored for output to a mobile telephone and/or pager.
UPDATE: Complete rewrite, still checking for DNS resolution before moving forward, but now using LWP::Simple to check web/SSL connectivity, and email notifications are more user-friendly in that the failed service and hostname is listed in the SOS.
|
NFS Watcher on May 16, 2002 at 13:13 UTC | by penguinfuz |
I wrote this script to address an issue I had with Apache authentication and multiple load-balanced web servers, and I run it from a cronjob every so often.
Basically I keep ONE copy of the relevant .htaccess files and NFS export their location to the other web servers in the cluster.
I am sure this script could be extended further, but I have not messed with it since it serves the basic purpose I started towards. I hope someone else will find this code useful as well. ;)
UPDATE:
Added "use strict;" and found a missing semicolon at: my $ip_add = "192.168.0.10"
And a global @mntargs; hanging around. |
gdynfont.pl - A Gimp plug_in on May 13, 2002 at 21:17 UTC | by simonflk |
This plugin will display information about a GDynText layer in the Gimp.
Background
I needed to find out the name of the font that I used in a .XCF a few months ago. I didn't have the font installed anymore, so GDynText selected the first font in the list. It may be that I have overlooked a simpler way of working out the font, nevertheless, this was a nice excursion into Gimp-Perl. Unless you are in a similar situation (no longer have a font, or someone gives you a Gimp file without associated fonts), this will be pretty useless because it doesn't display anything that you can't get from GDynText itself.
Installation
Copy this script into your ~/.gimp-1.2/plug-ins folder and make it executable.
Usage:
Select a GDynText layer and then select GDynFontInfo from the <Image>/Script-fu/Utils menu |
dfmon - Disk Free Monitor on May 09, 2002 at 12:04 UTC | by rob_au |
This little script is a rewrite of a very nasty bash script that for a long time formed a core crontab entry on systems which I administer. This code allows for the alerting of the system administrator via email when the disk space on any of the mounted partitions drops below a set threshold.
The corresponding template file used for the email sent to the system administrator may look similar to the following:
This is an automatically generated message.
The following file systems have reached a storage capacity greater
than the alert threshold set within the dfmon.perl administrative
script.
[% FOREACH fs = filesystems %]
[% FILTER format('%-10s') %][% fs.mountpoint %][% END -%]
[%- FILTER format('%15s') %][% fs.blocks_total %][% END -%]
[%- FILTER format('%10s') %][% fs.blocks_used %][% END -%]
[%- FILTER format('%10s') %][% fs.blocks_free %][% END -%]
[%- FILTER format('%5s%%') %][% fs.percent %][% END -%]
[%- END %]
--
|
VBA 2 Perl on May 08, 2002 at 18:37 UTC | by Mr. Muskrat |
vba2pl reads a VBA macro file and attempts to translate the contents to perl. It outputs to a file with the same path and base name but a .pl extension.
It's far from finished although it has come a long way since I started on it last night.
It got its start in my follow up to Win32 - M$ Outlook and Perl.
Mandatory "Dark Side" quotes...
"If you only knew the power of the Dark Side of the Force" - Darth Vader
"Once you start down the Dark Path, forever will it dominate your destiny, consume you it will..." - Yoda
|
SSManager on May 05, 2002 at 22:01 UTC | by Anonymous Monk |
This module wraps a number of SourceSafe OLE Server functions in one-step function calls. |
Duplicate file bounty hunter on Apr 24, 2002 at 00:42 UTC | by djw |
This will search out a directory recursively for duplicate files of a specified size (default: 100 MB). It logs everything and makes you dinner.
Enjoy,
djw
*update* 04.24.2002
After some suggestions by fellow monks, I have changed this to use Digest::MD5. Thanks everyone++.
djw |
Summarize Orange phone bill on Apr 03, 2002 at 09:21 UTC | by fundflow |
Parse the phone bill given by Orange cellphone company (www.orange.net)
It shows the information per phone number (number of calls and text messages, and the cost/minute). I use it to check which of my friends costs me the most :)
I'm now in the UK and this is the bill they give here, but this might work in other countries. In case there's a problem with other countries, let me know.
|
Oracle DB & Server Backup on Mar 29, 2002 at 02:42 UTC | by samgold |
Script to backup 4 Oracle databases and then backup the server using ufsdump. It backs up 2 databases a day and then the other 2 the next day. This was written for a Sun Box backing up to a DLT tape drive. Look for places where lines will need to be edited noted by #!! If you have questions or comments please let me know. |
Length on Mar 14, 2002 at 07:46 UTC | by Juerd |
This code is very simple, and I think every experienced Perl coder can think of it. However, I use this twice a day, so it's useful to me. It might help out others, or at least make some of you toss away those old while(1) { print length(<>) - 1, "\n" } scripts that some people have. |
tree.pl - kinda like tree on Mar 04, 2002 at 20:42 UTC | by crazyinsomniac |
Heard of the Perl Power Tools? Well, this one was missing (tree). This version builds a LoL, with the first element in each list being the directory name. Output of the real tree utility looks like
F:\DEV\FILE_TREE\B
| file
|
\---c
| file
|
\---d
file
Improvements are welcome. |
(code) Ignore The Man Behind The system(rsync) Curtain on Mar 01, 2002 at 03:33 UTC | by ybiC |
Wrapper for rsync, intended for backing up data betwixt a client (Cygwin on Win32|Linux) and a server (Linux).
I looked into the File::Rsync module, but it also employs 'exec' calls. So for now I will stick with a system call to rsync, for simplicity and to stay with standard-distribution modules only.
From a Perlish standpoint, this has been a refresher in the use of 'tee' (props to tye and Zaxo), another chance to use the nifty Getopt::Long and swell Pod::Usage modules, use timestamps for logfile names, detect OS type with $^O, use the keen-o filetest operators, sprintf for human-readable date+time, and to write another silly Perl script that's 50% pod.
As always, comments and criticism are wildly welcomed.
Update:
Experimenting with File::Rsync to ease parsing-on-Cygwin woes.
Add parsing code by Zaxo
Minor tweaks to pod
Present runtime in appropriate units (sec, min, hour...)
Handle backups of rsync modules *to* rsync server in addition to *from*
|
Searching for 'chunks' of data in very large files on Feb 28, 2002 at 18:21 UTC | by Ovid |
Recently, in the Perl beginners list, someone had a bit of a quandary. They were reading a 600 MB file and needed to find a search term, grab from the file 200 bytes of data both before and after this term and then search for another term within that 'chunk' of data.
I thought this was such a fun problem that I went ahead and wrote the program for this person (yeah, I know, I gave him a fish). This is deliberately overcommented in case the person did not know a lot of Perl. The basic idea is to search the file and return 400 byte 'chunks' in an array. |
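The approach can be sketched as two passes over the file (a hypothetical illustration; file name and search terms are placeholders): first record the byte offsets of the outer term, reading block by block with enough overlap that a match straddling a block boundary is still seen, then pull 200 bytes either side of each hit and test that window for the inner term.
use strict;
use warnings;
my ($file, $term, $inner, $context) = ('big.dat', 'FIRST_TERM', 'SECOND_TERM', 200);
open my $fh, '<', $file or die "$file: $!";
my @hits;
my ($tail, $pos) = ('', 0);
while (read $fh, my $block, 64 * 1024) {
    my $data = $tail . $block;
    my $base = $pos - length $tail;                 # file offset where $data starts
    push @hits, $base + pos($data) - length($term) while $data =~ /\Q$term\E/g;
    $tail = length($data) > length($term) - 1
          ? substr($data, -(length($term) - 1))     # keep just enough to span a boundary
          : $data;
    $pos += length $block;
}
for my $hit (@hits) {
    my $start = $hit - $context < 0 ? 0 : $hit - $context;
    seek $fh, $start, 0;
    read $fh, my $chunk, 2 * $context + length $term;
    print "hit at byte $hit\n" if index($chunk, $inner) >= 0;
}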
Virus Tester on Feb 24, 2002 at 04:40 UTC | by ProgrammingAce |
This little perl script will see if your antivirus software is paying attention by writing a file called aseicar.com to your C: drive. This file contains a small string that is harmless. Your antivirus program should register the new file as the "Test-String Virus". I repeat, THIS FILE IS HARMLESS! |
flexible find on Jan 27, 2002 at 17:17 UTC | by axelrose |
Here is a little script I always wanted to have:
a "find" script which
- runs on Macs (and *nix, Windows too)
- creates objects while running
- has an understandable "-prune" option
- uses Perl regex for filtering
This should make it easy to extend the idea for your purpose.
The example below outputs found files sorted by modification time.
You can change it to list directories sorted by size or by mtime
by providing a directory callback function:
my $dirfunc = sub { push @dirs, File->new( name => $_[0] ) };
WalkTree::walktree( $mystartdir, undef, $dirfunc, undef );
I'm happy about your comments.
Best regards,
Axel.
|
mrtg.errorcap - reformat MRTG errors on Jan 25, 2002 at 16:36 UTC | by grinder |
When running MRTG from cron, if a device fails to reply within the allotted time, it spits out a voluminous error message. If many devices are down (because a nearby gateway has fallen off the net), the exact nature of the problem can be difficult to see, as there is too much output to wade through.
This script takes the output, reformats and summarises it and sends it to the relevant authorities.
|
Regex Checker on Jan 08, 2002 at 18:45 UTC | by mrbbking |
2002-01-09 Update: Note to Posterity - there are better ways than this.
After posting this little thing (which truly was helpful for some),
two fine folks were kind enough to point out a
better way of understanding your regex and
a fine online reference - neither of which I had managed to locate on my own.
Many thanks to japhy for creating them and to tilly and crazyinsomniac for pointing them out.
If you're here looking to learn more about regular expressions, you'll do well to follow those links.
A new guy here was confused by the difference between
"capturing" with parens and "grouping" with brackets. I gave him this function to help show the difference, but it's useful for general testing of regular expressions.
The thing to remember is that it's easier to write a regex
that matches what you're looking for than it is to write
one that also doesn't match what you're not looking for.
Note: I did not use the 'quote regex' qr//; because that makes print display the regex in a way that differs from what the user typed. My goal here is clarity.
Further Reading (in increasing order of difficulty):
|
PingSweep on Jan 04, 2002 at 03:08 UTC | by sifukurt |
I've been working a lot with Perl/Tk recently (mostly just for fun), and I needed a script that would verify that a list of servers were active. So I combined the two things and ended up with PingSweep. It reads the servers out of an XML file (default name of the XML file is "hostdata.xml"), and the specifications for the XML file are included in the help text. It defaults to pinging the servers 20 times every 90 seconds, and sends an email to a specified address if any of the servers fail to respond. All of those options can be modified from the command line via Getopt::Long.
Hopefully you'll find it useful. As always, feedback is welcome. |
(Fake) CVS Import Utility on Dec 18, 2001 at 05:26 UTC | by vladb |
Does exactly what a 'cvs import' command would do, with the only exception that a version of the working source files will not be generated inside an active CVS repository. For that matter, it also doesn't require you to execute the 'cvs checkout' command to retrieve source code from the repository to start working on. This, in turn, means that you may start archiving versions of your current source files immediately after running this script inside the directory containing those source files. |
Verify Your FreeBSD CD on Dec 10, 2001 at 14:36 UTC | by crazyinsomniac |
If you just downloaded 4.4-install.iso, also known as the FreeBSD 4.4 ISO, and you "burned" it to a cd, and you wish to make sure that all the packages contained are as they should be (Debian has been really lame about this, I had at least 5 packages fail even though the "burn" was good; it turns out it was just a shabby iso release)
perl verify_freebsd.pl -d D:/>burnt
WHAT?!?!: For some reason, in \XF86336\CHECKSUM.MD5
the line
MD5 (Servers) = edc0aef739c1907144838e6c18587e02
which apparently is the md5 sum for the "directory" \XF86336\Servers, which you might be able to do in *nix, but not on winblow. |
Verify Your Debian Potatoe on Dec 09, 2001 at 16:04 UTC | by crazyinsomniac |
All your smurf are to "Verify Your Debian Potatoe" after baking smurf .iso
perl verify_potatoe.pl -d D:/>burnt
update: if you run across "d41d8cd98f00b204e9800998ecf8427e" you've got yourself an empty file. Beware, some cd-r's play mind games |
CSV Database Validation on Nov 19, 2001 at 09:20 UTC | by Ovid |
Recently, for a personal project, I was using DBD::CSV to create a simple database. I was dismayed at how little validation there was. The following program will allow you to create a CSV database and validate unique fields, foreign key constraints, and simple data types. You can also use this to validate data against regular expressions. Naturally, these are one-time validations.
Also included is support for validating an existing CSV database or CSV file. See POD for complete details.
Please let me know if you find any bugs in this code. Also, a code review would be appreciated :)
|
logrotate on Nov 12, 2001 at 19:23 UTC | by enaco |
This is a script for rotating logs.
It reads a configuration file that tells it what files to rotate, how long the rotation history is, and, if one is given, what program to send a HUP to when rotating its log.
This script was made primarily for learning perl, but it is still a useful piece of work.
The script is not quite done yet; I still have some bugs to fix. I am releasing it early and hoping for some input. |
spew - print out random characters on Nov 12, 2001 at 19:19 UTC | by grinder |
Print out random characters, suitable for making hard-to-guess
passwords, or manual IPSec authentication keys (which is why
I wrote this in the first place). |
Obscure on Oct 25, 2001 at 20:14 UTC | by JimE |
While contrary to the spirit of Perl, the harsh realities of the world sometimes make it desirable for your code to be less than totally open. Although it gets discussed from time to time in various forums, I've never actually found a tool to do this, so I wrote a 'perl code obscurer' for my current need that might be of some use to others. It does not go as far as the encrypt/decrypt model proposed by some, just file munging and var renaming to produce a distributable file that the interpreter can run. It discourages tampering but won't stop a determined reverse engineer. More details in the pod in the file... |
makeperl on Oct 17, 2001 at 21:47 UTC | by Rich36 |
A very simple script, but one that I use all the time. I find myself writing a lot of small Perl scripts and I find this to be a convenient method for getting started, saving a little time, and imposing a standard coding structure.
makeperl creates a new file, writes a standard format/template for a Perl script, changes the permissions to be executable, and opens the file in an editor.
I also use this a lot for when I'm trying out code examples from books, online, etc.
This has only been successfully tested on *nix, but should work elsewhere. |
snapdiff -- Compare CPAN autobundle files on Oct 16, 2001 at 23:03 UTC | by Fletch |
CPAN.pm can create a bundle file which contains all of the modules
which are currently installed on a box with its autobundle
command. This program will compare two such snapshot files
against each other, noting what modules are in the first but not in the second (and vice versa)
as well as if there are differing versions of the same module.
Handy if you're trying to duplicate the configuration of one
box on another, or want to see what's changed over time if
you keep historical bundles.
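Roughly, the comparison boils down to something like this sketch, assuming each snapshot lists one "Module version" pair per line in its CONTENTS section (the file names are placeholders):
# Read "Module version" pairs from an autobundle snapshot file.
sub read_bundle {
    my ($file) = @_;
    open my $fh, '<', $file or die "$file: $!";
    my ( %mod, $in_contents );
    while (<$fh>) {
        if (/^=head1\s+(\S+)/) { $in_contents = ( $1 eq 'CONTENTS' ); next }
        $mod{$1} = $2 if $in_contents and /^([A-Za-z_][\w:]*)\s+(\S+)/;
    }
    return \%mod;
}
my $old = read_bundle('Snapshot_old.pm');
my $new = read_bundle('Snapshot_new.pm');
for my $m ( sort keys %$old ) {
    if    ( !exists $new->{$m} )       { print "only in first:  $m\n" }
    elsif ( $old->{$m} ne $new->{$m} ) { print "version differs: $m ($old->{$m} vs $new->{$m})\n" }
}
print "only in second: $_\n" for grep { !exists $old->{$_} } sort keys %$new;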
|
Dump.pl on Oct 03, 2001 at 06:42 UTC | by hsmyers |
Nothing fancy, just a straight forward binary dump of a file in a formatted display to either STDOUT or filename. Particularly handy to compare what you think is in the file with what is actually in the file! hsm |
Simple Directory Mirror on Aug 31, 2001 at 21:59 UTC | by sifukurt |
This is a quickie script I wrote to backup files in my work directory to a backup directory. I thought it was handy, so I thought I'd post it here. |
Linux message log webifier on Aug 31, 2001 at 08:33 UTC | by hans_moleman |
This is a script I threw together to provide a web version of the Linux message log, indexed by service name. I'm a new perl coder so comments and especially suggestions would be much appreciated...
NOTE: This code has been modified as a result of the helpful suggestions given to me... ;)
09/02/2001 - heavier modifications now, I have put the file write stuff into a sub where it belongs and added code to read the logfile both by service and by date. By the way, this script should work for any syslog type log, I have tried it on /var/log/secure and it works like a charm...
09/03/2001 - added variables for HTML colour settings, kind of a poor man's style sheet ;).
|
GetUsers on Aug 30, 2001 at 23:15 UTC | by Jerry |
Client app for my GiveUsers server. Currently set up to get the password and shadow files (gpg encrypted before crossing the network!!) and to change all shells to /bin/false (except the first 20 lines). Be sure to create GPG keys and import them to the other box (as the user which will be running the GetUsers and GiveUsers scripts) |
Regex Tester on Aug 21, 2001 at 20:28 UTC | by George_Sherston |
A cgi that lets you try out regular expressions and see the results in an ergonomic way. No rocket science, but a thing I, who am fairly new to regexes, have found very useful for learning how they work, and coming up with the right one to do what I want. Some people can work it all out in their heads - I find trial and error indispensable!
The code is below, in case anybody wants to put it on his or her own machine, or check to make sure it's doing what it says on the box. But if you just want to use it, then please feel free to click here
§ George Sherston
PS - Alright already, I know all my variables are global and I didn't use straitjacket ... it's For Home Use!
PPS - Having said that, I would, of course, welcome style comments and suggestions for how to make it slicker. Every day and in every way I am getting better and better. |
NotSoLong.pl on Aug 12, 2001 at 03:53 UTC | by ichimunki |
|
Code::Police on Aug 01, 2001 at 19:41 UTC | by Ovid |
This is the Code::Police module. Provide this module to programmers who fail to use strict and most of their coding errors will be instantly eliminated. |
cols2lines.pl on Jul 31, 2001 at 19:41 UTC | by tfrayner |
Time once again to reinvent the wheel, I suspect. I wrote this a couple of years back as an exercise in perl. Specifically, a friend of mine was wanting to manipulate large (>10GB) tables of data. Part of his analysis involved transposing a table such that the columns became lines and vice versa. As a result I wrote this rather convoluted script. It only loads one line of the table into memory at a time, rather than the whole table.
The other points of this exercise were to make the code as well-structured (whatever that means :-P) and as user-friendly as possible. I imagine it's possible to load the essential functionality into a single line, but that wasn't my aim here.
Of course, I imagine there's a perfectly good CPAN module out there which would do this much better than this :-)
Update: The original script opened and closed the input file with each column that it read. I've changed to the more efficient seek function suggested by RhetTbull. |
Perl Code Colorizer on Jul 31, 2001 at 11:29 UTC | by BrentDax |
Okay, this one is pretty scary. This script reads in a (simple) chunk of Perl code (using normal filter behavior) and spits out an HTML file with certain things colorized. (You can see the list in the setup of the %config hash at the top.) It's extremely regexp-heavy. It's also pretty easy to confuse.
Notable bugs:
1. In a line like m#regexp#; #comment, the comment won't be colorized. Sorry.
2. In a line like m{foo{1,2}bar}, the program will get confused and stop highlighting after the 2. I've really got to work on nesting...
For all that, however, there's a lot of cool things it /can/ do, like:
-recognizing and colorizing (most) heredocs
-colorizing statements like @{&{$foo{bar}}} nicely to show which curlies belong to which sigil
-actually working most of the time
Only the colors for sigils are well-thought-out--the rest were just temporary values I assigned on a whim.
Also note that this was a lot of monkeys and typewriters--I myself am not quite sure how it all works correctly. Well, have fun with this chunk of code! |
Perl port of locate on Jul 19, 2001 at 18:41 UTC | by sifukurt |
Personally, on a Linux system, I use locate very regularly. The problem is that I have a couple Win32 systems that I use, and, of course, there isn't a tool like locate. So I wrote this. I use it quite a bit, so I thought I'd share it here in the hopes that others would find it useful. I tried to be as compliant with the GNU locate command as possible, plus I added a few simple features that I needed but weren't part of the GNU locate specifications.
There are a few variables that you'll want to alter to suit the needs of your system. Comments welcome. |
Generic Password Generator on Jun 28, 2001 at 15:35 UTC | by claree0 |
A simple (i.e. easily adapted by those with more specific requirements) password generator, which will generate passwords according to a template. Useful for environments where the use of a specific format of password is encouraged.
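The idea can be sketched like this (a hypothetical template scheme, not claree0's actual one: 'C' means consonant, 'v' vowel, 'n' digit):
my %class = (
    C => [ grep { !/[aeiou]/ } 'a' .. 'z' ],   # consonants
    v => [qw( a e i o u )],                    # vowels
    n => [ 0 .. 9 ],                           # digits
);
sub gen_password {
    my ($template) = @_;                       # e.g. "CvCvnn"
    join '', map {
        my $set = $class{$_} or die "unknown template character '$_'\n";
        $set->[ rand @$set ];                  # pick one member at random
    } split //, $template;
}
print gen_password('CvCvnn'), "\n";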
|
nugid - new user/group ids on Jun 25, 2001 at 13:28 UTC | by grinder |
When you have a large host with hundreds of users, managing file ownerships becomes hard. People come and go, but their files remain, and these files have to be assigned to other users. By itself the chown(1) program is too severe, as it will operate on anything it can get its hands on. This can be very disruptive in a directory tree where files may be owned by dozens of individuals.
So I wrote a script that is a little more selective; in its own words, it will
Selectively modify user and group ownerships of files and
directories depending on current ownerships. All files that
match the given group id or user id will be changed to the
new specified id. Numeric and symbolic ids are recognised.
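In spirit, the selective part is just a stat check before the chown; a much-reduced sketch (not the script itself, and the numeric ids are placeholders):
use File::Find;
my ( $from_uid, $to_uid ) = ( 1042, 1137 );    # placeholder ids
find( sub {
    my ( $uid, $gid ) = ( lstat $_ )[ 4, 5 ];
    return unless defined $uid and $uid == $from_uid;
    # -1 leaves the group untouched on most systems
    chown $to_uid, -1, $_
        or warn "chown $File::Find::name: $!\n";
}, @ARGV );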
Comments, suggestions and criticisms welcomed. I think the script name sucks, but I can't think of a better one. |
Directory Tree Comparison Module (File::DiffTree) on Jun 23, 2001 at 00:23 UTC | by bikeNomad |
This is a package that I wrote after seeing some other scripts here that did similar things. This package allows the behavior on same/different files as well as comparison to be pluggable using CODE references. It may become a CPAN module if the response here is positive enough. An example program that uses it is at the end. |
Password Manager on Jun 12, 2001 at 18:12 UTC | by DaveRoberts |
A simple Tk solution to allow simple management and change of NT and Unix passwords. Designed to make management of many accounts (and passwords) a little easier.
This does not remember your passwords (yet) but provides a single interface that allows passwords to be managed. |
macls on Jun 06, 2001 at 22:00 UTC | by snafu |
Rev ver 2.5:
This script is intended to get the mtime, atime, ctime information from a file or a list of files in a given directory. This is helpful to do basic file security auditing on your system. Read the comments in the script to get usage.
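At its simplest the idea is just stat() plus localtime(); a bare-bones sketch (the real script adds symbolic permission modes, locale/timezone handling and CLI parsing):
for my $file (@ARGV) {
    my @st = stat $file or do { warn "$file: $!\n"; next };
    my ( $atime, $mtime, $ctime ) = @st[ 8, 9, 10 ];
    printf "%s\n  atime %s\n  mtime %s\n  ctime %s\n",
        $file,
        scalar localtime $atime,
        scalar localtime $mtime,
        scalar localtime $ctime;
}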
Updates: Rev ver 2.5
Added CLI parsing and POD. Now in github here
Updates: Rev ver 2.1
Added locale tz tweak.
Added symbolic permissions modes to output. Many thanks to pwalker@pwccanada.com @ http://php.ca/manual/ru/function.fileperms.php. He doesn't know he helped but his code was invaluable to the symbolic mode code I added to this script.
Finally made a few changes that were suggested from Jeffa. Made other minor (beautification) changes.
General clean up.
Syntax changes that I felt made the script easier to read and more maintainable. Added GPL comments at top with (c) 2001. Subversion tags added for info only.
If there are tweaks performed on the script I would sincerely appreciate an email with a copy of the script with your changes.
Tested successfully on: Linux, Solaris, FreeBSD
Comments would be appreciated.
|
(code) Yet Another Gzip Tarball Script on Jun 04, 2001 at 03:38 UTC | by ybiC |
I wrote this ditty to automate file copies, while retaining last-modified timestamps.
- Backup system configs, web directories, and perl scripts on 4 computers.
- Make it easy to keep perl scripts synchronized across the same 4 PCs.
Create gzipped tarball of all files in specified directories. Status and error messages written to console and logfile. Selectable compression level, recursion(y/n), log and dest files via commandline switches. Tested with Perl5.00503/Debian2.2r3, ActivePerl5.6/Win2k, Perl5.6.1/Cygwin/Win2k.
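The tarball step itself can be done with Archive::Tar along these lines (a minimal sketch with placeholder paths; the real script adds logging, recursion control and selectable compression):
use Archive::Tar;
use File::Find;
my @files;
find( sub { push @files, $File::Find::name if -f $_ }, '/etc', '/home/me/scripts' );
my $tar = Archive::Tar->new;
$tar->add_files(@files);                  # stores contents and timestamps
$tar->write( 'backup.tar.gz', 9 )         # a true second argument requests gzip compression
    or die "write failed: ", $tar->error;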
Sample run logfile at tail of pod. Critique, corrections and comments wildly welcomed.
Thanks to Vynce, mlong, bikeNomad, zdog, Beatnik, clintp, Petruchio and DrZaius for suggestions, tips and pointers. Oh yeah, and some guy named vroom, too.
Latest updates 2001-06-05 14:25 CDT
Correction:
Our very own bikeNomad wrote Archive::Zip, not Archive::Tar.
|
Perl Tags generator on May 25, 2001 at 23:19 UTC | by bikeNomad |
Perl editor tags generator (like ctags) that uses the debugger hooks to avoid parsing Perl itself.
update: avoid getting into long loops when the line number happens to be 65535 because of (apparently) a Perl bug |
Work Backup on May 16, 2001 at 01:26 UTC | by John M. Dlugosz |
This is a Perl program to perform daily backups of interesting "work" files in an intelligent manner. Developed under Win32, should be OK on all platforms. |
JournalH.pl on May 15, 2001 at 21:54 UTC | by JSchmitz |
Journal maker for the lazy - more time to drink Whoop-ass and listen to Slayer. |
Unlinker.pl on May 15, 2001 at 00:03 UTC | by JSchmitz |
Uses Perl's unlink to auto-delete numerically named files. Needed this here at work.
|
ping for ppt on Apr 23, 2001 at 00:29 UTC | by idnopheq |
ping -- send ICMP ECHO_REQUEST packets to network hosts
ping tests whether a remote host can be reached from your computer.
This simple function is extremely useful as the first step in testing
network connections. ping sends a packet to the destination host
with a timestamp. The destination host sends the packet back.
ping calculates the time difference and displays the data.
This test is independent of any application in which the original
problem may have been detected. ping allows you to determine whether further
testing should be directed toward the network connection or the
application. If ping shows that packets can travel to the remote
system and back, the issue may be application related. If packets can't
make the round trip, the network may be at fault. Test further.
Added actual ~pinging~ sound via the system bell, per the canonical naval usage of the term and the jargon file's entry (see the -a -A options).
The funniest use of `ping' to date was described in January 1991 by Steve Hayman on the Usenet group comp.sys.next. He was trying to isolate a faulty cable segment on a TCP/IP Ethernet hooked up to a NeXT machine, and got tired of having to run back to his console after each cabling tweak to see if the ping packets were getting through. So he used the sound-recording feature on the NeXT, then wrote a script that repeatedly invoked `ping(8)', listened for an echo, and played back the recording on each returned packet. Result? A program that caused the machine to repeat, over and over, "Ping ... ping ... ping ..." as long as the network was up. He turned the volume to maximum, ferreted through the building with one ear cocked, and found a faulty tee connector in no time.
Requires up-to-date Net::Ping and Time::HiRes. Many thanks to Abigail's neat warn and die subs from the http://language.perl.com/ppt site!
Two items anyone can help with:
- Sig{INT} control for Win32 - Term::ReadKey (which I could not get to work right for this one script)?
- TTL on various systems, like Win32. I know the default from the RFC, but what is the reality, and how do I automagically query it?
UPDATE: ACK! I posted an old version w/o the ping sound! Here it is.
UPDATE 1: submitted to PPT for inclusion |
md5sum for PPT on Apr 13, 2001 at 01:35 UTC | by idnopheq |
md5sum computes a 128-bit checksum (or fingerprint or message-digest) for each specified file. If a file is specified as `-' or if no files are given md5sum computes the checksum for the standard input. md5sum can also determine whether a file and checksum are consistent.
For each file, `md5sum' outputs the MD5 checksum, a flag indicating a binary or text input file, and the filename. If file is omitted or specified as `-', standard input is read.
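The per-file work is essentially just Digest::MD5; a minimal sketch (the binary/text flag and the checksum-verification mode are left out):
use Digest::MD5;
for my $file ( @ARGV ? @ARGV : '-' ) {
    my $fh;
    if ( $file eq '-' ) { $fh = \*STDIN }
    else { open $fh, '<', $file or do { warn "$file: $!\n"; next } }
    binmode $fh;                                        # checksum the raw bytes
    print Digest::MD5->new->addfile($fh)->hexdigest, "  $file\n";
}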
I added functionality from BSD md5 and some other md5sum ports (MS-DOS, etc.) for compatibility. FSF md5sum is the default. |
TAR.pl on Apr 11, 2001 at 20:06 UTC | by $code or die |
I wanted to find a simple command line TAR/GZ program for Windows to GZip my modules for uploading to CPAN. But I couldn't find one I liked.
Of course there is always cygwin, which I urge everyone who uses "explorer.exe" as a shell to download. It's only a small download.
However, this fitted my needs. Please let me know any comments or improvements.
Update: You might also want to check out btrott's script which does the same thing but also stores the path. I didn't see this before I posted mine. |
Gantt Diagrams on Apr 02, 2001 at 10:56 UTC | by larsen |
Simple module to produce Gantt diagram from
XML project descriptions.
Or: what Perl can do to help you if you're planning to conquer the world. |
detect sneaky processes which modify their process name. on Mar 26, 2001 at 06:11 UTC | by rpc |
This script walks through each PID in /proc and performs several checks to determine whether or not a process has modified its process name. It's trivial for a program to mung its process name and fool utilities such as 'ps'. There are many malicious tools available which try to hide their presence, using more common process names like 'pine'. However, if the binary itself was not invoked with this name, it's possible to detect using the /proc interface. |
pinger - ping a range of hosts on Mar 23, 2001 at 19:19 UTC | by grinder |
A little script that provides an easy way of pinging all
the hosts from, e.g. 192.168.0.1 to 192.168.0.100. The
output can be tailored in various ways. |
vargrep on Mar 13, 2001 at 01:21 UTC | by japhy |
A raw attempt at scanning a Perl program for specific variables. Results are usually good. |
Week Partitioner on Mar 09, 2001 at 02:02 UTC | by japhy |
This program takes a month and a year, and returns output like cal, for the days of the month between Monday and Friday. |
chmug on Mar 07, 2001 at 04:01 UTC | by tye |
A perl-only replacement for chmod, chown, and chgrp that I found very convenient when I was a Unix sys admin. It lets you change the mode, owning user, and owning group all at once (or any combinations thereof).
This is some pretty old code (last updated in 1995) but it doesn't look horrendous so I thought I'd add it to the archive. |
Tcl/Tk to Perl Tk on Feb 27, 2001 at 06:06 UTC | by strredwolf |
A rather crude converter, but if you design it with wish,
and then change it over, it cuts things down a bit.
|
csv2png.pl - Line Graphs from a CSV file on Feb 16, 2001 at 23:14 UTC | by clemburg |
This is a small script that takes CSV data as input
and generates a PNG file displaying a line graph as output.
Basically, each column in the CSV file is a data series that
will be displayed as a line graph. The first series labels the X axis. |
Find and convert all pod to html on Jan 30, 2001 at 01:33 UTC | by ryddler |
Searches the site and lib directories on an ActiveState install for any POD that isn't in HTML format and converts it. Rebuilds HTML TOC after conversion. A logfile is kept to track additions. |
shrink.pl - Scales down images on Jan 21, 2001 at 23:28 UTC | by Vortacist |
This is my first major perlscript (with help from Falkkin)--it scales down the size of images in a specified directory and all of its subdirectories. This is not a compression algorithm--it simply resizes the images based on command-line options. The user may specify size (as "xx%" or a number of pixels), starting directory, and which types of image files to resize. The user is required to specify a size; if none is given, the online help message is printed. Please see this message for more info.
I tend to do a lot of image-resizing for CD-ROM scrapbooks and thumbnails and thought other people might find this script useful for similar tasks.
I would appreciate any suggestions on how to make this script more efficient, and I'd also like to know if the help text is clear enough. Thanks!
Update: Changed code as per merlyn's suggestion below, with one slight difference (see my reply). |
(self-deprecated) slack updater on Jan 12, 2001 at 05:22 UTC | by mwp |
Deprecated: If you're looking for a script to do
this for you, I highly recommend autoslack, written by
David Cantrell (of the Slackware Team). It can be found in
the unsupported directory on any slackware mirror.
A relatively simple script that I'm writing which scans a
local, uninstalled copy of Slackware 7.1 and updates the
packages from a slackware-current mirror. Very rough around
the edges, so be gentle. Gives you the option of installing
downloaded packages but is not integrated with /var/adm/packages
info in this version. Useful to a point, mostly written for
myself only because Patrick Volkerding & Co. are writing a
script named 'autoslack' (in Perl!) with the same exact
functionality, 'cept probably better! I was just impatient.
Good example of Digest::MD5, following our recent
discussions!
|
Sun Fingerprint on Jan 05, 2001 at 08:37 UTC | by a |
Uses Sun's on-line md5 database to validate your Solaris
system files (exe and libs). As in, checking for
trojans or just getting
the proper version numbers. You could run it in a pipe
w/ find or ls, e.g.
ls /bin | sunfingerprint -
has various levels of output (even the whole, original html)
so you could do various things to see what it
complains/notices.
|
Find Duplicate Files on Jan 04, 2001 at 23:08 UTC | by salvadors |
As my original Find Duplicate Files script was so popular I decided to take the advice of turning it into a module. Here's the initial version of it. I'd appreciate feedback on ways to provide a nicer, more useful, interface than just returning a HoL.
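For reference, the underlying idea (group files by size, then by MD5, and hand back a hash of lists keyed by digest) can be sketched like this:
use File::Find;
use Digest::MD5;
sub find_duplicates {
    my (@dirs) = @_;
    my ( %by_size, %by_md5 );
    find( sub {
        return unless -f $_;
        my $size = -s _;
        push @{ $by_size{$size} }, $File::Find::name;
    }, @dirs );
    for my $size ( grep { @{ $by_size{$_} } > 1 } keys %by_size ) {
        for my $file ( @{ $by_size{$size} } ) {
            open my $fh, '<', $file or next;
            binmode $fh;
            push @{ $by_md5{ Digest::MD5->new->addfile($fh)->hexdigest } }, $file;
        }
    }
    # keep only digests shared by more than one file
    return map { @{ $by_md5{$_} } > 1 ? ( $_ => $by_md5{$_} ) : () } keys %by_md5;
}
my %dups = find_duplicates( $ENV{HOME} );     # digest => [ duplicate files ]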
Thanks,
Tony |
mkpkg on Jan 03, 2001 at 19:25 UTC | by kschwab |
A wrapper script for pkgmk on Solaris to create Solaris
software packages. (pkgmk and pkgtrans are overly complicated...this simplifies things for me) |
switcher on Dec 31, 2000 at 01:18 UTC | by $CBAS |
woohoo! my first code on PM! :-)
This little thing sweeps through directories to change the extensions of files (I use it to rename .MP3 files to .mp3 ... damn AltoMP3!)
|
tidyhome on Dec 17, 2000 at 01:02 UTC | by billysara |
A little script to tidy users home areas by moving files
from ~/ to subdirectories related to filetype. Written so
I didn't need to spend my time tidying up all those .tar.gz
files I accumulate from freshmeat...
|
col for PPT on Nov 30, 2000 at 04:06 UTC | by chipmunk |
This script is a Perl implementation of the Unix col utility, which filters reverse line feeds from input. (More details: col manpage) It was written for the Perl Power Tools project. Sadly, the PPT webpages have not been updated since I submitted this script. So, I'm posting it here, because I don't want the script to go to waste.
When I wrote this script, I had access to two different implementations of col, one on BSD and the other on IRIX. I added new command-line options so that, in most cases where the BSD and IRIX implementations behaved differently, my own implementation could be instructed to emulate either.
Although there are no known bugs in the script, there may be bugs that I am not aware of. Please let me know if you find any. Comments and suggestions are welcome as well.
P.S. Also, if anyone has a use for this script, please let me know. I'd never even heard of col until I saw it listed in the PPT!
|
onlyone on Oct 31, 2000 at 01:58 UTC | by BoredByPolitics |
This is my second perl program, written to fulfill a need at work. I was wondering if someone with more experience could critique it, there being no other perl programmers in my office :-)
Its aim is to be run as part of the .bash_profile, to remove any prior D3 program running from the user's IP, to solve the problem of rebooted clients at a customer of ours (stopping them rebooting wasn't an option).
Oh, almost forgot, it's also supposed to run suid root, as the users often use different login names from the same workstation. |
pcal on Oct 22, 2000 at 05:45 UTC | by japhy |
pcal displays calendar information for the next X
months (it defaults to just the current month). With just a
little modification, it could be made to act exactly like the
cal utility found on most Unix machines. |
PMail on Oct 02, 2000 at 17:47 UTC | by le |
This script takes a Unix Mailbox (in so-called Berkeley format),
and generates several HTML files from it. (Just as many
Mail-to-HTML programs do.)
The output looks almost like the one from hypermail.
Yeah, I know, you might say "Come on, this is mail2web no. 2365, who needs it?"...
well I just did it for learning purposes. Maybe you can learn, too. |
syscheck: check system files for changes on Sep 11, 2000 at 02:05 UTC | by cianoz |
This program checks files and directories on your system,
reporting if they were modified (by checking an md5 sum),
created or deleted since the last time you initialized it.
I use it to check my /etc /sbin /lib /bin /usr/bin and /usr/sbin
for changes I didn't make (backdoors?).
You can run it from a cron job or manually
(it is safe to store a copy of the checksum database outside the system).
It takes about 2 minutes to scan all relevant files on my systems.
Although it seems to work for me, it is just a quick hack;
I would appreciate some hints to make it better.
|
expirescore.pl on Sep 02, 2000 at 02:29 UTC | by le |
I use the slrn newsreader and my scorefile got too
big (it was over 2 MB), with a lot of old scores,
because I didn't set expiration
times for the scores, so I hacked this script, that reads
in the scorefile and lets me expire scores
(based on Time::ParseDate features like older than
"5 days", "3 months", "2 years"), and writes back the scorefile.
Currently, it only runs interactively, but maybe I (or you)
will add the needed features to run without user
interaction (e.g. for cron scripting).
|
Pathfinder - find duplicate (shadowed) programs in your PATH on Aug 26, 2000 at 09:12 UTC | by merlyn |
Run this program (no arguments) and see which items in your PATH environment setting are shadowing later programs of the same name. This is an
indication that you might get failures running the scripts of others, or perhaps if
you ever rearrange your PATH. |
Find out where symlinks point on Aug 09, 2000 at 15:31 UTC | by merlyn |
Walks through one or more directories specified on command line, and fully expands any symbolic links within those directories to their real locations, taking into consideration all the relative and absolute symlinks that occur, recursively.
Originally written for a
Performance Computing/SysAdmin magazine Perl column. Go see there for a line by line description. |
wanka on Aug 08, 2000 at 23:57 UTC | by turnstep |
Just a simple hex editor-type program. I actually wrote this
back in 1996, so go easy on it! :) I cleaned it up a little to
make it strict-compliant, but other than that, it is pretty much
the same. I used it a lot when I was learning about how
gif files are constructed. Good for looking at files byte by byte.
I have no idea why it was named wanka but it has stuck. :)
|
ppm.xml made more human-friendly on Aug 03, 2000 at 11:52 UTC | by Intrepid |
Please use caution!
I have found out from external sources that in XML, whitespace outside
of tags can be significant, and that although the present version of
PPM does not mind how my script changes ppm.xml, a future
version may. It might be better, therefore, not to use this script.
The Perl Package Manager (with ActivePerl) writes to a file named
ppm.xml each time a module is installed using PPM. It doesn't add
nice whitespace to the file, however, resulting in a mess if one
ever needs to or wants to (out of curiosity) go in there and see
what has happened before. This little (slightly "anal" :)
utility script can be called from the very end of PPM.bat
(another file that comes standard with the installation of
ActivePerl) by adding the line:
CALL fixPPMXMLfile
to the file below the lines that match this:
__END__
:endofperl
at the end of the batchfile. This kicks off your cleaner which
adds some whitespace to make more human-friendly a file which,
it is to be admitted, is ordinarily only machine-read. |
report ipchains DENY entries in /var/log/messages* files on Jul 28, 2000 at 07:07 UTC | by Anonymous Monk |
Creates and mails a report on ipchains DENY entries in the
/var/log/messages* files. The script remembers the last
entry from the last run so you only get new entries on the
next run. Add it to your crontab to generate periodic
reports. |
What's eating all your disk space? on Jul 12, 2000 at 03:25 UTC | by hawson |
I'm constantly having to clean out space on lots of computers, and looking at several screens of 'du' output hurts. So I wrote this little script to parse and format the output from 'du'. I know, I know, it's not strictly perl, but monks should be aware that there are things that exist outside these cloistered walls.
N.B. Since this is meant to be used in a pipe, it's usually all on a single line, and without comments. |
Simple Calculator on Jun 29, 2000 at 01:20 UTC | by ncw |
A simple perl calculator for use on the command line. See code for instructions. |
ButtonFactory on Jun 28, 2000 at 16:40 UTC | by t0mas |
A package that creates custom png buttons.
|
UserId checker on Jun 10, 2000 at 04:58 UTC | by brick |
This program is meant to munge through N passwd files and
check for logins with multiple UIDs, UIDs with multiple
logins, and logins with a UID of zero (0) that are not root. |
NIST Atomic Clock Time on May 11, 2000 at 02:50 UTC | by reptile |
Uses LWP::UserAgent to get the current date and time (Eastern) from the NIST Atomic Clock website at www.time.gov
This code is public domain.
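A stripped-down version of the fetch (hedged: the page layout has surely changed since this was written, so the screen-scraping part is omitted):
use LWP::UserAgent;
my $ua  = LWP::UserAgent->new( timeout => 30 );
my $res = $ua->get('http://www.time.gov/');
die "fetch failed: ", $res->status_line, "\n" unless $res->is_success;
my $html = $res->content;     # the real script extracts the date/time from this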
|
Cropping a postscript file on May 08, 2000 at 21:40 UTC | by ZZamboni |
This script allows you to select a portion of a postscript
file and crop it. The whole file is included in the result, but
the appropriate commands are included to crop it to the section
you select. Instructions for use are included in the script. |
DHCP and DNS Compare on Mar 30, 2000 at 20:14 UTC | by Hyler |
For Windows NT administrators. Compares DHCP server and DNS server info to see if they match reasonably. Requires NT Resource Kit installed. |
column formatter on Oct 18, 2008 at 17:36 UTC | by sflitman |
print two or more files in nicely spaced columns |
dump binaries on Oct 18, 2008 at 17:29 UTC | by sflitman |
rolled my own version of od using my favorite tool, Perl! dump is aware of the terminal width using Term::Size if available |
splicepath on Oct 13, 2008 at 22:00 UTC | by casiano |
Like Perl splice operator but works on
PATH-like environment variables, i.e.
lists whose elements are separated by colons |
Remove an Installed Module on Oct 13, 2008 at 21:25 UTC | by casiano |
Remove an installed Perl distribution |
Using Linux::Inotify2 with POE on Oct 07, 2008 at 15:02 UTC | by jfroebe |
The examples in Linux::Inotify2 are for Event, Glib::IO and a manual loop. With the help of rcaputo, tye and Animator, I was able to get Linux::Inotify2 to work with POE. :) |
cvs wrapper with ssh-agent on Sep 19, 2008 at 15:02 UTC | by jacques |
I wrote this cvs wrapper because we were using cvs over ssh and I didn't want to keep logging in each time I invoked cvs. If you are doing the same thing, you might find it useful. One thing to note is that if someone has root, it is possible for them to get your password, since ssh-agent keeps it unencrypted in memory.
|
Trash temporary files on Aug 22, 2008 at 00:27 UTC | by bruno |
BACKGROUND
I am a little obsessive with the tidiness of my home folder. I cannot stand seeing loose, uncategorized files scattered everywhere (let alone on the Desktop -- the horror!). But more often than not, I run across files that do not really fit into any particular category, and I also do not know whether I'll want to store them permanently or not.
So I have a ~/tmp folder in which I toss "the garbage" there, and every month or so I take a look at it and see what gets deleted and what gets "saved" and categorized.
THE SOLUTION
So I wrote this little script (I think of it as one of those robot-housewives from The Jetsons) that looks into my ~/tmp folder every hour and sends those files that are N days or older (measured as days upon arrival to the ~/tmp folder) to the trash. This gives you
a certain time window in which you can re-evaluate the usefulness of
that file and rescue it from oblivion or not.
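Conceptually it is only a few lines; a sketch under the assumption that "trash" is simply another directory (the paths, the age threshold and the use of mtime as a stand-in for arrival time are all simplifications):
use File::Find;
use File::Copy qw(move);
my $tmp   = "$ENV{HOME}/tmp";
my $trash = "$ENV{HOME}/.trash";     # placeholder trash location
my $days  = 30;
find( sub {
    return unless -f $_ and -M _ > $days;   # -M: age in days since last modification
    move( $File::Find::name, $trash )
        or warn "could not trash $File::Find::name: $!\n";
}, $tmp );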
|
Analyzing Apache memory usage on Jul 18, 2008 at 15:09 UTC | by Azhrarn |
After a day of trying to figure out why one of my web servers was locking up, I found that it was using a bit too much memory. But I had no idea how much, and Linux memory reporting is a bit arcane at best. Especially with something like apache + mod_perl/php using shared memory pools. So after some analysis, I came up with the included script.
|
distr - show distribution of column values on May 23, 2008 at 13:45 UTC | by Corion |
This program returns a quick tally of the different values for a column. My primary use for this program is to find out the most common date value in a file, to rename that file to that date.
It is also very convenient to use this program to get a quick overview of the distribution
of lengths, especially for numbers.
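The tally itself is the classic hash-count idiom; a minimal sketch for, say, the third whitespace-separated column (the column index and separator are placeholders):
my $col = 2;                      # zero-based: third column
my %count;
while (<>) {
    my @fields = split;
    $count{ $fields[$col] }++ if defined $fields[$col];
}
for my $value ( sort { $count{$b} <=> $count{$a} } keys %count ) {
    printf "%8d  %s\n", $count{$value}, $value;
}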
Currently, I'm "confident" that I'm picking the right value as the maximum value if the value occurs in at least 60% of the rows of the sample I'm taking. This has shown to be sufficient, but better would be an estimator that determined the size of the sample or expanded the sample as long as there was not enough confidence in the "modus". |
Regenerating grub.conf on Feb 28, 2008 at 21:00 UTC | by Tanktalus |
Automatically modifying grub.conf when inserting a new kernel or removing an old one on linux can be difficult. Even just modifying it in vi can be annoying. So, after getting some advice from other Linux users, I settled on the idea of regenerating it.
When I add a new kernel, I always rename it to /boot/kernel-$version-gentoo$release (where $release =~ /-r\d+/ or it's blank) to make it easy to view. So, using that personal convention, combined with a bit of Template Toolkit, I get this script. I think all my personal conventions are at the top of the script, or in the template at the bottom.
Hope this helps other linux users out there that may be building their own kernels (and thus modifying grub.conf themselves). |
pnt2bmp on Feb 10, 2008 at 17:14 UTC | by shotgunefx |
Found some of my old drawings I did on my Tandy 1000 using Tandy Deskmate circa 1989; nothing existed to view them or convert them. I was able to figure out the format (for my version of Deskmate, anyway), and this will generate a BMP from the PNT file. |
art2bmp on Feb 08, 2008 at 09:38 UTC | by shotgunefx |
Converts obscure ".art" files that were generated from "VGA Art Studio" into bmps. If you have a ".art" file, it is almost certainly a different format.
A quick hack, but it works. |
histogram on Jan 20, 2008 at 15:26 UTC | by polettix |
histogram [--usage] [--help] [--man] [--version]
histogram [--include-zero|--zero|-z] [--min-data-points|-m <num>]
[--noise|-N <threshold>] [--numeric|-n]
[--percentual|-p] [--step|-s <step>] [--tail|-t <length>]
# Generating histogram's data
shell$ grep 'interesting' file.txt | gawk '{print $3}' | histogram
# If you have numbers you can keep the ordering and divide into "classes"
shell$ histogram --step 10 --numeric data-column.txt
This utility produces histograms out of input data. Every line in input
(except the optional trailing newline) is regarded as an item, and a count
is kept for each different item. After the counting phase is terminated,
the label-count pairs are printed in output, ordered by count, descending.
This is the basic work mode.
If you happen to know that your inputs are numbers, and you care about
keeping them in order, you can specify --numeric|-n. This will make
sure that you'll have something resembling a distribution, and also that
all the gaps between will be filled (at integer intervals). If you also
want 0 to be included, however far it may be, just pass option
--include-zero|--zero|-z.
Moreover, if your data are numeric, and you'd rather group them by
steps (e.g. 0-9, 10-19, etc.) you can pass option --step|-s. Steps
start all from 0, and need not be integer ones.
|
Log Rotation on Nov 23, 2007 at 08:15 UTC | by Danikar |
A small script I wrote for work. My first script that is going to be used for anything real, so I am pretty nervous putting it into practice tomorrow.
I figured I would throw it up here and see if anyone had any suggestions and so forth.
The key thing is that so far it seems to work. Also, I know very little about log rotation, so hopefully I got the concept down heh.
I used a few `command` calls to archive old logs; I am sure there is a perl module out there for it, but time was of the essence here. I am open to any suggestions for switching that stuff out.
Date time stamp isn't working correctly. =\ |
piglist on Nov 16, 2007 at 18:20 UTC | by hsinclai |
piglist makes a formatted, sorted list of subdirectory names and their sizes in K.
If the path names are longer than 44 characters, piglist reformats itself to output
a 132 character wide report.
piglist uses File::Find and the output of 'du', on unix/linux
|
hidiff - char highlight diff utility on Oct 31, 2007 at 00:53 UTC | by bsb |
Highlight differences between 2 files using
curses highlighting, a little like "watch -d". |
Cdrom data recovery script on Oct 29, 2007 at 04:55 UTC | by bitshiftleft |
If you are something of a code collector (like code off this website), you are probably putting it on CDs. Under Win32, every time you drag files to the CD window you create a new session on it, and in doing so that new session is supposed to copy the session before it. As it happens, it doesn't do this reliably. Have you ever noticed disappearing files you swore you put on it? They are still there in the previous sessions. You only get to view the last session with Windows Explorer. I bought a used copy of Kaspersky's book ($5 with CD), "CD Cracking Uncovered".
I used two utilities from it to recover two of my CD's.
The last chapter deals with CD recovery. These utilities read the CD at the raw sector level. The script below uses the SPTI interface (Admin), but can easily be changed to the ASPI interface (non-Admin). Yes, there are utilities out there you can buy,
but when you realize that recovery is an art (one size does not fit all), it's nice to have a freebie you can modify to your own needs that comes with source code. |
Maple worksheet (lists) to XLS converter on Oct 10, 2007 at 13:16 UTC | by mwah |
This script reads a Maple worksheet exported to
plain text (.txt), extracts all 2D number lists
(Arrays [x,y]) of the form
'list:=[[ ...'
and
'list := [ ...'
and writes them into an excel worksheet
with column headings named after the lists.
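The Excel side can be handled with Spreadsheet::WriteExcel roughly like this (a sketch with a made-up list; the real script first extracts the lists from the worksheet text):
use Spreadsheet::WriteExcel;
# Pretend this was already parsed out of the worksheet text:
my %lists = ( mylist => [ [ 1, 2.5 ], [ 2, 3.7 ], [ 3, 5.1 ] ] );
my $wb  = Spreadsheet::WriteExcel->new('maple_lists.xls');
my $ws  = $wb->add_worksheet('lists');
my $col = 0;
for my $name ( sort keys %lists ) {
    $ws->write( 0, $col,     "${name}_x" );    # column headings named after the list
    $ws->write( 0, $col + 1, "${name}_y" );
    my $row = 1;
    for my $pair ( @{ $lists{$name} } ) {
        $ws->write( $row, $col,     $pair->[0] );
        $ws->write( $row, $col + 1, $pair->[1] );
        $row++;
    }
    $col += 2;
}
$wb->close;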
If applied to an unconverted worksheet (.mw),
only input lists are extracted (output lists
will usually be generated by Maple via evaluation
of functions or expressions).
|
GPLifier on Sep 28, 2007 at 17:26 UTC | by Minimiscience |
This script can be used to apply the GPLv3 copyright notices to a specified source file. Simply edit lines 10 & 11 to use your name and (if desired/needed) the path to your local copy of the GPL, and then run it with the source files as arguments. If a program consists of multiple source files, use the -n and -d flags to specify a name & description for the whole program. If you need to skip some number of lines in each file before inserting the notice, use the -h flag. To copy your local version of the GPL to a specified directory (default '.'), use the -c flag. |
podinherit - Imports pod from superclasses on Aug 20, 2007 at 18:34 UTC | by rvosa |
DESCRIPTION
When object-oriented perl classes use inheritance, child classes will have additional methods not immediately apparent to users unfamiliar with effectively
navigating perldoc and inheritance trees. For example, IO::File inherits from IO::Handle, and so an IO::File object "does" anything an IO::Handle
object does.
Novice users are sometimes confused by this, and think that APIs are more limited than they really are (on a personal note: I found this to be the case when bug reports came in that some object no longer had the "set_name" method, when really I had re-factored it into a superclass).
This script remedies that by analyzing a class (provided on the command line), recursing up the class's inheritance
tree, collecting the methods in the superclasses and importing the pod for the methods in those superclasses. The resulting concatenated pod is written to STDOUT. That output can then be re-directed to a file, or formatted, e.g.
by doing:
podinherit -class Some::Class | pod2text | more
Module authors might use this script during the packaging of their release by doing something like:
podinherit -class Some::Class >> Some/Class.pm
IMPLEMENTATION
This script contains a subclass of Pod::Parser, which implements a stream parser for pod. The appropriate documentation for superclass methods is identified by the "command" method, which takes the following arguments:
my ( $parser, $command, $paragraph, $line_num ) = @_;
To recognize pod, the method name needs to be part of a $paragraph start token,
e.g. to find pod for 'method', permutations of the following will be recognized:
=item method
=head1 method()
=item method( $arg )
=item $obj->method( $arg )
Or, specifically, anything that matches:
/^(?:\$\w+->)?$method(?:\(|\b)/
I.e. an optional object reference with method arrow ($self->), a method name, and an optional opening parenthesis or token delimiter \b, to be matched against
the $paragraph argument to the C<command> call in subclasses of Pod::Parser. |
scissors - divide an image in sub-images for easy printing on Aug 02, 2007 at 14:45 UTC | by polettix |
This script helps in dividing an image into blocks that can be easily printed
on a normal printer instead of an A0 plotter. Input images are divided into
tiles whose dimensions can be established quite easily and flexibly.
You can access the full documentation using the --man option. |
join - join two files according to a common key on Jul 12, 2007 at 15:16 UTC | by Corion |
A counterpart to part, it allows you to join two files side by side according to common values. This is similar to the join UNIX command except that the join command expects the input files to be sorted according to the keys, while this program will slurp the second file into a hash and then output the result according to the order of the first file.
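The algorithm is exactly as described; a stripped-down sketch joining on the first tab-separated column of each file (the real program has options for column choice, separators and the tied-hash fallback):
my ( $file1, $file2 ) = @ARGV;
# Slurp the second file into a hash keyed by its first column.
open my $fh2, '<', $file2 or die "$file2: $!";
my %right;
while (<$fh2>) {
    chomp;
    my ( $key, $rest ) = split /\t/, $_, 2;
    $right{$key} = $rest;
}
# Walk the first file in order, emitting the joined rows.
open my $fh1, '<', $file1 or die "$file1: $!";
while (<$fh1>) {
    chomp;
    my ($key) = split /\t/;
    print join( "\t", $_, defined $right{$key} ? $right{$key} : '' ), "\n";
}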
Optionally (and untested) it can use a tied hash as on-disk storage in the case that the storage for the files is larger than the available RAM. |
File splitting script on Apr 21, 2007 at 13:12 UTC | by Alien |
A simple script that can be used to split files. |
DeathClock on Feb 21, 2007 at 21:48 UTC | by bpoag |
DeathClock attempts to determine how much time remains before a given filesystem has zero free space remaining, based on how quickly existing storage is being utilized. It will send a panic message via email to one or more recipients depending upon how confident it is that death (zero free space in the filesystem) is imminent. DeathClock can be used as a monitoring tool as well, spitting out predictions of woeful and untimely filesystem demise at user-specified intervals. It can either be run as a one-time gauge, or set to monitor a filesystem constantly. DeathClock is a morbid script that lives a mostly solitary life, with the possible exception of his partially mummified mother, in a gothic-inspired home on a hill overlooking a motel. It enjoys taxidermy, quiet dinners with the motel guests, and attacking unsuspecting customers while they shower.
Example output:
# /usr/local/bin/deathclock.pl /prod 37 1 1 0 foo@bar.com
DeathClock: Starting up..
DeathClock: Collecting 37 seconds of growth information for /prod. Please wait......................................
DeathClock: 85094.79 MB remaining in /prod. Estimated time of death: 7w 5d 9h 56m 9s.
|
part - split up files according to column value on Feb 07, 2007 at 09:44 UTC | by Corion |
I often have to split up text files for consumption in Excel. A convenient way of splitting up a file meaningfully is to split it up by the value of a column. The program does this by accumulating the input into a hash of arrays keyed by the column value.
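In outline it is just this (a sketch splitting on the first whitespace-separated column; the real program handles arbitrary columns, separators and header lines, and the output file names here are made up):
# Accumulate lines into a hash of arrays keyed by the column value ...
my %part;
while (<>) {
    my ($key) = split;
    push @{ $part{$key} }, $_;
}
# ... then write one output file per key.
for my $key ( keys %part ) {
    open my $out, '>', "part.$key" or die "part.$key: $!";
    print {$out} @{ $part{$key} };
    close $out;
}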
There is an awk oneliner (I'm told) that circumvents the memory limitations this program encounters:
awk -F '{ print $0 > $3 }' FILES
If you want something like this program in a module, see List::Part
Update: Also see join - join two files according to a common key. If you need one, you'll likely need the other too.
Update: v0.03 now can part according to more than one column.
Update: v0.04 now can output multiple header lines into every file.
Update: v0.06 fixes two errors: The column at the end of each line couldn't be used (well) as the key column. Header lines are now actually printed.
Update: Now there also is a Github repository for the case that you want to submit a patch.
Update: The code is now also on CPAN as App::part. |
Excel2Text on Dec 18, 2006 at 21:58 UTC | by stonecolddevin |
Simple conversion script for Excel to a pipe ("|") delimited text file. Takes two arguments, the filename (can be relative) and the directory to save the text file to. |
venn-list: produce union of histograms on Dec 17, 2006 at 03:19 UTC | by graff |
I needed this in order to assemble word counts from 8 different sources of text, keeping track of which words came from which sources, and what the overall word frequencies were. So simple, yet so useful (the POD is longer than the code itself). |
MD5 Cracker on Nov 15, 2006 at 18:59 UTC | by Alien |
Simple script for when you have an md5 hash and want to crack it! |
lesmets.pl on Nov 10, 2006 at 08:14 UTC | by revence27 |
lesmets.pl helps you put accents on characters when you, like me, can't get a keyboard that allows you to. The POD even has instructions on how to extend Nautilus, the GNOME file manager, with lesmets.pl.
The name comes from French. Part of the French for "(if) you put them", and churned a bit. Get the POD documentation out with pod2html.
Um, also I have noticed it doesn't treat all STDINs the same way. But it should run well (it does on my Ubuntu). If it doesn't, patch it -- the code is the clearest you'll find in these Code Catacombs. |
Module Finder on Oct 18, 2006 at 15:10 UTC | by innominate |
I've been having problems with some cpan installs. You know the deal. They install and test correctly, but something just doesn't work right. *cough* Needs more testing! *cough*
So, in my haste to track down a few of these bugs, I wanted to know where the module(s) were stored. I threw together an extremely simple, but rather useful little script. (It was originally a one-liner, hence the anon sub and the speedy ternary conditional.)
It's definitely not fancy, but it does its job gracefully.
IMO it's pretty self-explanatory, but the gist is that it does a simple ignore-case regex over all dirs and subdirs in @INC. When it gets a hit, it spits it out.
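The whole trick fits in a few lines; a sketch of the same idea (the pattern comes from the command line, and only .pm files are reported here):
use File::Find;
my $pattern = shift or die "usage: $0 <module-name-pattern>\n";
my %seen;
find( sub {
    return unless -f $_ and /\.pm$/ and $File::Find::name =~ /$pattern/i;
    print "$File::Find::name\n" unless $seen{$File::Find::name}++;
}, grep { -d } @INC );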
(Edited to clean up formatting!) |
linked-port: find given linked libraries ("shared objects") in FreeBSD Ports on Oct 14, 2006 at 07:34 UTC | by parv |
This is a preliminary version -- with output via Data::Dumper
and lacking POD (: "it's all in code", see region around
GetOptions()) -- to find linked libraries in files related to
FreeBSD ports.
This came about due to recent OpenSSL security advisories,
necessitating rebuild of ports which were linked to old libraries.
Dmitry Marakasov in message
<20060907181108.GB90551@hades.panopticon> on freebsd-ports
mailing list posted ...
for port in `pkg_info -oaq`; do
grep OPENSSL /usr/ports/$port/Makefile >/dev/null &&
echo $port;
done
... which seemed not very reliable as that would miss any port which
does not have "OPENSSL" in its Makefile. "security/nss" is such a port
used by Firefox. So, I decided to just use ldd(1) directly on
the files installed ...
# listpkg (used in filepkg): http://www103.pair.com/parv/comp/src/perl/listpkg-0.22
# filepkg: http://www103.pair.com/parv/comp/src/sh/filepkg
filepkg . | egrep '^/.*(bin|libexec)' \
| xargs -I % ldd % 2>/dev/null | less -p'(crypto|ssl)'
... output of which was rather tiresome to search through, and that
was enough to open up my $EDITOR & flex some perly muscle.
|
Move/merge directories on Sep 14, 2006 at 23:15 UTC | by diotalevi |
This allows you to merge two identical-ish directory trees. It won't overwrite any files if there's a conflict. |
diotalevi's grep on Sep 14, 2006 at 22:29 UTC | by diotalevi |
This grep is much like everyone else's perl reimplementation of grep. Its only distinguishing features are automatically looking inside bzip2, gzip, zip, and tar files. It borrows the pretty formatting used by petdance in ack. This started life as an improved version of the grep that comes with Solaris, which isn't recursive. |
cutf - cut by field name on Sep 14, 2006 at 22:23 UTC | by diotalevi |
Print selected parts of lines from each FILE to standard output. Selects parts by field name unlike /usr/bin/cut which uses column numbers. |
cksum contents of a tarball on Sep 14, 2006 at 22:17 UTC | by diotalevi |
Produces cksum info on the contents of a tarball while leaving the minimum files extracted at any given moment. |
uhead: "head -c" for utf8 data on Sep 01, 2006 at 03:25 UTC | by graff |
This simple command-line utility does for utf8 text data what GNU "head -c N" does for ASCII data: print just the first N characters of files (or STDIN). Since Perl's built-in "read" function is able to read characters (rather than just bytes), this a pretty trivial exercise. But I wanted to post it anyway, because it's a nice demonstration of a fairly complex process (handling variable-width characters) being made really simple. |
Color diff for terminal on Aug 12, 2006 at 12:07 UTC | by polettix |
Update: there's a stripped down Perl 6 version, check it out!
This will help you browse diff-like outputs (either plain or unified) on the terminal, colorising differences. It will also be able to invoke other programs that produce diff-like output (like some commands for version control systems).
The approach is different from other colorising scripts:
- I'm not using Algorithm::Diff, and
- I'm using terminal colorising capabilities, not HTML
Regarding the first bullet, it's better than it may look. To diff two files I'm either fork()-ing a diff process, or asking you to do so, but this is intentional because it gives access to (nearly) all the options that diff has. Moreover, this script works with all diff-producing programs (if the format is compatible with diff, either in the "plain" style or in the "unified" one), while Algorithm::Diff would be restricted to the two-files-diffing world (more or less).
Regarding the second bullet, I needed something to quickly show me the differences on the terminal - just like diff (or cvs diff, or svk diff, or...) does, but with colors to make them stand out. The drawbacks are many, among them:
- you have to use a terminal that works with Term::ANSIColor
- it's mostly useless for very long diffs
Again, this was no problem for me because I have such a terminal and the diffs I check are usually less than a few (terminal) pages.
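The colorising core really is tiny; a sketch for unified-diff input read on STDIN (the posted script does much more, including running diff or the version-control command itself):
use Term::ANSIColor qw(colored);
while (<>) {
    chomp;
    if    (/^\+/) { print colored( $_, 'green' ), "\n" }   # added lines
    elsif (/^-/)  { print colored( $_, 'red' ),   "\n" }   # removed lines
    elsif (/^@@/) { print colored( $_, 'cyan' ),  "\n" }   # hunk headers
    else          { print "$_\n" }
}
Piping "diff -u old new" through a filter like that is the minimal use of the idea.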
As a final note, if you have symbolic links you can get the most out of the script by:
- installing it as colordiff somewhere inside $PATH
- making a symbolic link to it named cdiff, again inside the $PATH
Invoking the script when it's called cdiff basically turns it into a drop-in replacement for diff(1), so that you can:
shell$ cdiff -u file1 file2 # with colors, instead of
shell$ diff -u file1 file2 # without colors, or
shell$ diff -u file1 file2 | colordiff # wow, way too long!
I know that this name-based behaviour change is generally frowned upon, but I think that in this case the advantage is self-evident.
Be sure to check the documentation for hints about using this with version control systems, like CVS or SVK. I hope you'll find it useful! |
lspm — list names and descriptions of Perl modules in a directory on Jul 16, 2006 at 05:48 UTC | by Aristotle |
Remember pmdesc2 - lists modules with description? It’s a script that lists any or a subset of the modules you have installed, complete with the version number and a description, inspired by Tom Christiansen’s pmdesc, but without a number of its annoying flaws, with much higher speed and far cleaner code.
This time around, I added a bunch of options and DOS-newline translation to address problems brought up by Fritz Mehner. In the process, I also cleaned the code up even further and added POD and proper --help etc by way of the inimitable Pod::Usage.
Update 2006-07-16T11:03+0200: fixed a minor oopsie with --align-cont. |
Exchange to Postfix Firewall Configuration on Jun 29, 2006 at 16:00 UTC | by madbombX |
This script will pull all users' SMTP addresses from your Active Directory (including primary and secondary email addresses) and list them in the format "user@example.com OK" which Postfix uses with relay_recipient_maps. It will also automatically create a mynetworks file, transport file, and relay_domains file to ensure all information is properly included. Additionally, if you have amavisd installed, you can specify whitelist and blacklist information for the information retrieved. You can also include senders and recipients not on either list and assign them a score. Don't forget to restart postfix after running this script.
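For the curious, a minimal sketch of the Active Directory query behind the relay_recipient_maps part, using Net::LDAP with hypothetical host, credentials and base DN (the real script also builds the other Postfix maps):
use strict;
use warnings;
use Net::LDAP;

my $ldap = Net::LDAP->new('dc.example.com') or die "$@";
$ldap->bind('lookup@example.com', password => 'secret');   # hypothetical credentials

# proxyAddresses holds the primary (SMTP:) and secondary (smtp:) addresses
my $res = $ldap->search(
    base   => 'dc=example,dc=com',
    filter => '(proxyAddresses=smtp:*)',
    attrs  => ['proxyAddresses'],
);
for my $entry ($res->entries) {
    for my $addr ($entry->get_value('proxyAddresses')) {
        print lc($1), " OK\n" if $addr =~ /^smtp:(.+)$/i;
    }
}
$ldap->unbind;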
Project Page: http://eric.lubow.org/projects/getcrr.php
There are links to the latest version of the file and the latest version of the config file located on the project page. |
perltoxmi on Jun 16, 2006 at 19:03 UTC | by g0n |
Rough and ready way to convert OO Perl code to XMI for UML class diagrams, for import into Argo/Rose/etc. |
psh (perl testing shell) on May 11, 2006 at 18:16 UTC | by jettero |
I'm sorry to say, I've written another perl shell. This one is designed to help with the development of perl programs by giving you a place to try things out.
I've been using it to support my own coding for quite some time now and a friend encouraged me to publish it. I wrote this one to compete with hilfe specifically, though python and ruby have similar devices.
UPDATE(2/14/08): This has evolved a bit since my original post May 11, 2006 at 14:16 EDT.
UPDATE(8/28/8): More evolution. There is now support for paging in less, shell forks, config editing, and assorted perldoc forks. |
script-starter on May 02, 2006 at 19:33 UTC | by polettix |
A template for creating new scripts, much in the Module::Starter spirit, together with a script that filters the template to actually create the new script. Ok, it's simpler than what I've described!
The script has documentation, but it's not based on the template - just because the template evolved with time (e.g. english translation).
Update: followed clever suggestions from chanio.
Update: followed clever suggestion from http://perlbuzz.com/2009/11/the-horrible-bug-your-command-line-perl-program-probably-has.html. |
md5sum check on Mar 23, 2006 at 05:57 UTC | by w3b |
When an unexpected user shows up on a computer, we are in a state of emergency. A bad guy can modify files like /etc/ssh/sshd_config or /etc/passwd... I don't want to check checksums by hand every day, so I wrote a script which does it for me :) In .sumlog we write a checksum::path entry for each file, and that's all
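A minimal sketch of the comparison step, assuming a .sumlog of checksum::path lines (an illustration, not the posted script):
use strict;
use warnings;
use Digest::MD5;

open my $log, '<', "$ENV{HOME}/.sumlog" or die "cannot open .sumlog: $!";
while (my $line = <$log>) {
    chomp $line;
    my ($stored, $path) = split /::/, $line, 2;
    open my $fh, '<', $path or do { warn "missing: $path\n"; next };
    binmode $fh;
    my $now = Digest::MD5->new->addfile($fh)->hexdigest;
    print "CHANGED: $path\n" if $now ne $stored;
}
|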
Splitted Zip on Mar 04, 2006 at 02:42 UTC | by polettix |
Sometimes email puts a hard limit on the size of the files we can send. On these occasions compression comes in handy, because it reduces the size of the data, but it may not be sufficient. Many tools can produce a split ZIP file, but that approach, while general, requires more knowledge on the part of the recipient, who is obliged to save all the chunks in a directory. Many users simply don't grasp this simple concept, and insist on double-clicking on the file they receive.
This is where split-zip.pl comes to the rescue. If it can. Its purpose is to arrange the files to be sent so as to produce multiple ZIP archives, each of which remains valid and self-contained. Thus the casual user double-clicking on one will be happy and will see some of the files. Of course, this approach fails miserably if there is a need to send a single, huge file - you're stuck training your users better in that case.
Note: I only tested it in a few cases, be sure to read the disclaimer at the end!!!
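A minimal sketch of the bin-filling idea with Archive::Zip (the size limit and naming scheme are made up, and sizes are estimated before compression):
use strict;
use warnings;
use Archive::Zip;

my $limit = 2 * 1024 * 1024;          # e.g. 2 MB per archive
my ($part, $size, $zip) = (0, 0, undef);

for my $file (@ARGV) {
    my $fsize = -s $file or next;
    # note: sizes are pre-compression, so the limit is only an estimate
    if (!$zip or $size + $fsize > $limit) {
        $zip->writeToFileNamed(sprintf 'part%02d.zip', $part) if $zip;
        $zip = Archive::Zip->new;
        $part++;
        $size = 0;
    }
    $zip->addFile($file);
    $size += $fsize;
}
$zip->writeToFileNamed(sprintf 'part%02d.zip', $part) if $zip;
|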
Ppushd - emulate pushd and popd commands of the shell on Feb 28, 2006 at 19:32 UTC | by ambrus |
This script emulates the shell functions pushd, popd, and dirs. It stores the directory stacks in a data file ~/.ppushd.dat.
It needs some help from the shell because otherwise it won't be able to change the directory. Save the script with the name ppushd-bin somewhere in your path, make it executable, and add the following commands to your bashrc to be able to use it.
pdirs() { cd "`ppushd-bin dirs "$@"`"; }
pinit() { cd "`ppushd-bin init "$@"`"; }
ppushd() { cd "`ppushd-bin pushd "$@"`"; }
ppopd() { cd "`ppushd-bin popd "$@"`"; }
pinit
This script is dedicated to demerphq.
Update: it might be better to implement this fully as a shell script without perl.
Update 2016-01-06: See also "I'm trying to write a script that will change directory (or set a variable), but after the script finishes, I'm back where I started (or my variable isn't set)!" in Greg's bash FAQ |
jfind - java class search on Feb 10, 2006 at 21:15 UTC | by vladb |
Searches for classes in a set of jar files that match a given pattern.
Usage:
jfind -help -s <pattern> <jars ...>
Class <pattern> could contain /.
E.g.: javax/ejb/EJBObject
the script will convert all / to .
Usage Examples:
jfind digester *.jar
jfind -s digester *.jar
Search classpath...
jfind org.foo.Bar /usr/lib/foo.jar:/usr/lib/bar.jar
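A minimal sketch of the jar search itself with Archive::Zip (an illustration; the real script also handles classpath strings and the / to . conversion shown above):
use strict;
use warnings;
use Archive::Zip;

my ($pattern, @jars) = @ARGV;
$pattern =~ s{/}{.}g;                  # accept javax/ejb/EJBObject style patterns too
for my $jar (@jars) {
    my $zip = Archive::Zip->new($jar) or do { warn "cannot read $jar\n"; next };
    for my $member ($zip->memberNames) {
        next unless $member =~ /\.class$/;
        (my $class = $member) =~ s{/}{.}g;
        $class =~ s/\.class$//;
        print "$jar: $class\n" if $class =~ /$pattern/i;
    }
}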
|
Xcalcfin on Jan 05, 2006 at 19:57 UTC | by smokemachine |
Financial calculator |
Comment on Dec 28, 2005 at 18:13 UTC | by smokemachine |
Comment your code |
Saving digital camera photographs on Sep 30, 2005 at 23:37 UTC | by Hue-Bond |
I've been having compact flash corruption problems lately so I decided to keep my CF as empty as possible and format it regularly. I've prepared this little script that keeps an eye on the system log to see when I plug in the camera and then moves the pictures to a safe place. It needs to run as root but the real work is done in a separate, unprivileged child. This way, all I have to do is plug in the camera, watch the syslog for a while and unplug it again.
I tried quite hard not to reinvent any wheel. If I did, please pull out your flamethrower. |
expanded "ls" on Aug 10, 2005 at 03:14 UTC | by blahblahblah |
wrapper for "ls" that supplies default args and paged output
I got tired of typing "ls -halt ... | less -E". Now I just run this script, which I've named "sl".
Usage:
sl
sl -R
sl -R *.pl *.txt
|
Autoresponder automator on Jul 27, 2005 at 10:04 UTC | by jkva |
UPDATE: Working on quite a few improvements now; using the current version is ill-advised.
UPDATE: Fixed. I wonder if I should let it go to the next users on file access errors or die() ...
UPDATE: b10m notified me of design flaws, I am fixing it now. Stay tuned.
UPDATE: Added status message when successful copy of .forward file has been achieved.
UPDATE: Fixed a bug and added comments.
This script generates a .forward Exim filter, plus the needed .vacation.msg reply message. Those two files are standard; the .vacation.msg is changed to contain the start and end dates, so that the person receiving the reply knows when the user will be back.
I could've done it by hand, but I thought writing a script would be good exercise.
Any hints/advice/notices of bad style or simply horrid code are greatly appreciated. Criticism == learning opportunity.
It's the first script I've written in Unix (or in Vim, for that matter). Future additions include reading the user data out of a file, and removing itself from a crontab when the "last" date has been passed.
I'm thinking of reading the vacation dates out of the MySQL database that the company web-based calendar uses... |
monitor suid and world writtable files on Jul 18, 2005 at 14:44 UTC | by Anonymous Monk |
A script to scan the system for new suid and world-writable files and send an email about the scan results if it discovers one or more. It skips /home and NFS-mounted directories.
The NFS-mount skipping part is for Solaris only.
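A minimal sketch of the detection part with File::Find (the mailing and the Solaris NFS check are left out; the /home prune is just an example):
use strict;
use warnings;
use File::Find;

my @findings;
find(sub {
    if (-d $_ and $File::Find::name =~ m{^/home(?:/|$)}) {
        $File::Find::prune = 1;   # example prune; the real script also skips NFS mounts
        return;
    }
    return unless -f _;
    my $mode = (stat _)[2] & 07777;
    push @findings, "setuid: $File::Find::name"         if $mode & 04000;
    push @findings, "world-writable: $File::Find::name" if $mode & 0002;
}, '/');
print "$_\n" for @findings;   # the real script mails this report instead
|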
Transposer on Jul 13, 2005 at 02:47 UTC | by Pied |
I sometimes miss Ocaml while parsing lists...
Yesterday, I needed something that would take a list of tuples and give me back the list of all the nth elements of each tuple.
Here we go!
Update: changed the name, as graff pointed out it was just a kind of matrix transposer in fact :)
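For anyone who just wants the core of it, a sketch of the same transposition on an array of array references:
use strict;
use warnings;

# turn a list of tuples into a list of columns, e.g.
# ([1,'a'], [2,'b'], [3,'c'])  =>  ([1,2,3], ['a','b','c'])
sub transpose {
    my @rows = @_;
    my @cols;
    for my $i (0 .. $#rows) {
        for my $j (0 .. $#{ $rows[$i] }) {
            $cols[$j][$i] = $rows[$i][$j];
        }
    }
    return @cols;
}

my @t = transpose([1, 'a'], [2, 'b'], [3, 'c']);
print "@{ $t[0] }\n";   # 1 2 3
print "@{ $t[1] }\n";   # a b c
|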
Random Password Generator on Jul 01, 2005 at 07:43 UTC | by satz |
This script will generate a 9-character random password, consisting of 3 uppercase characters, 3 lowercase characters & 3 numbers.
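A minimal sketch of one way to do it (not necessarily how the posted script works):
use strict;
use warnings;
use List::Util qw(shuffle);

my @upper = ('A' .. 'Z');
my @lower = ('a' .. 'z');
my @digit = ('0' .. '9');

my @chars = (
    (map { $upper[rand @upper] } 1 .. 3),
    (map { $lower[rand @lower] } 1 .. 3),
    (map { $digit[rand @digit] } 1 .. 3),
);
print join('', shuffle @chars), "\n";
|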
Color Space Converter on Jun 13, 2005 at 15:11 UTC | by js29a |
Converts between RGB and HSL, RGB and CMY, and RGB and XYZ.
Input and output can be a three-element array or a hash reference.
Input and output values are normalized to the 0.0 - 1.0 range. Input values are asserted.
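As an example of the simplest pair, CMY on normalized values is just the complement of RGB; a sketch:
use strict;
use warnings;

# with all channels normalized to 0.0 - 1.0, CMY is the complement of RGB
sub rgb_to_cmy { my ($r, $g, $b) = @_; return (1 - $r, 1 - $g, 1 - $b) }
sub cmy_to_rgb { my ($c, $m, $y) = @_; return (1 - $c, 1 - $m, 1 - $y) }

printf "C=%.2f M=%.2f Y=%.2f\n", rgb_to_cmy(1.0, 0.5, 0.0);   # C=0.00 M=0.50 Y=1.00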
|
webpage update watch - used to watch event update on Jun 11, 2005 at 21:25 UTC | by Qiang |
Ever read something like 'come back in a few days to check ...' on a webpage? I certainly do. How do you get informed when the event gets posted?
I wrote this script to watch for event updates on certain webpages. It keeps the page's last update time in a plain file and compares it with the page's current update time.
If the times don't match, it sends an email to inform me of the update.
The hash used to store the page info may be a little lame and could be factored out into a config file with the help of Config::Tiny or others.
I set up this script as a cron job and run it daily. I have never missed an event I am interested in.
This script probably only works on static webpages.
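A minimal sketch of the check itself, using the Last-Modified header and a hypothetical state file (the mail step is left out):
use strict;
use warnings;
use LWP::UserAgent;

my $url   = 'http://example.com/events.html';   # hypothetical page
my $state = '/tmp/watch.last';                   # hypothetical state file

my $ua  = LWP::UserAgent->new(timeout => 30);
my $res = $ua->head($url);
die "request failed: ", $res->status_line, "\n" unless $res->is_success;

my $current = $res->header('Last-Modified') || '';
my $stored  = '';
if (open my $fh, '<', $state) {
    my $line = <$fh>;
    if (defined $line) { chomp $line; $stored = $line }
}

if ($current ne $stored) {
    print "page updated: $url ($current)\n";     # the real script sends mail here
    open my $out, '>', $state or die "cannot write $state: $!";
    print {$out} "$current\n";
}
|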
unzip on May 14, 2005 at 23:33 UTC | by polettix |
A little utility which includes some options from Info-ZIP's unzip program (available at http://www.info-zip.org/pub/infozip/). Help message:
Usage ../unzip.pl [-l|-p] [-q] [-x xlist] [-d exdir] file[.zip] [list]
Default action is to extract files in list, except those in xlist, to exdir.
If list is not provided, all files are extracted, except those in xlist.
Extraction re-creates subdirectories, except when exdir is provided.
-d extract to provided directory, no directory structure.
-h this help message
-l list files (short format)
-p extract files to stdout, no messages
-q quiet mode, no messages
-x exclude files that follow in xlist, comma-separated (Note 1)
Note 1: files with commas aren't allowed yet :)
The utility is primarily intended as a quick replacement for unzip on systems where this utility isn't available. I've implemented the options I use most, like seeing what's inside the file (-l option) and extracting to a directory without structure (-d option, even if I'm not really sure of this). I also find extraction to standard output quite useful from time to time, so I put it in (-p option).
As an added bonus, you can provide a list of files to extract (default is all files) and of files not to extract (-x option). Testing will be implemented in the future, if I remember...
The command line differs from that of Info-ZIP unzip because the order for the options is different. Here I expect all options listed at the beginning, then the zip file name, then the names of the files to extract (if any). That's basically how Getopt::Std::getopts works, sorry for this.
See also Create/Extract Zip Archives from #include for a bidirectional utility (but with less options for unzipping). |
dar - pure perl directory archiver on May 06, 2005 at 19:45 UTC | by Ctrl-z |
Simple directory archiver akin to tar, but works on Win32. See pod. |
detacher.pl on Apr 29, 2005 at 05:02 UTC | by enemyofthestate |
Detaches itself from the terminal and starts a program. Originally written to start java code that refused to go into the background. Since then I've used it on perl 'daemons' as well. Based on a trick I learned way back in my OS-9 days.
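A minimal sketch of the classic detach dance (fork, setsid, fork again, redirect the standard handles); the command to start is taken from @ARGV:
use strict;
use warnings;
use POSIX qw(setsid);

die "usage: $0 command [args]\n" unless @ARGV;

defined(my $pid = fork()) or die "fork failed: $!";
exit 0 if $pid;                      # parent returns to the shell immediately
setsid() or die "setsid failed: $!";
defined($pid = fork()) or die "fork failed: $!";
exit 0 if $pid;                      # session leader exits, so we can never regain a tty

chdir '/' or die "chdir failed: $!";
open STDIN,  '<', '/dev/null' or die "reopen STDIN: $!";
open STDOUT, '>', '/dev/null' or die "reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT   or die "reopen STDERR: $!";

exec @ARGV or die "exec failed: $!";
|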
ppk on Apr 18, 2005 at 01:37 UTC | by northwind |
Perl Process Killer (PPK)
Usage: ppk {process name, required} {iterations, optional}
|
Remote killing of processes on Apr 08, 2005 at 11:47 UTC | by PhilHibbs |
Opens a telnet session, lists processes, and prompts the user to kill each one. The code is wrapped in my version of the bat2pl framework and is saved as a .cmd file, but anyone using this in a non-MSWindows environment can delete up to the #!perl line and also the last line of the script. It also uses Win32::Console to wrap long lines in a more readable manner, but those 4 lines can be replaced with either a hard-coded value (e.g. $w=80) or the equivalent console-width-determining logic for the platform.
If your implementation of ps formats its output differently, then the hard-coded value 47 may need to be adjusted for more pleasant display (hanging indent).
Also uses Term::ReadKey which is not a standard distro module. |
ICQ group chat on Mar 14, 2005 at 17:03 UTC | by dpavlin |
This small script is for unlucky people (like me) who need group chat functionality for ICQ, but don't have a client which supports it.
You will need a separate account for this bot. The !config command exists so that you can edit the YAML configuration file (to kick somebody out) and reload the config without restarting the script.
Latest development version (with support for buddies, logging using DBI, !last and ton of other features) is
available from Subversion repository. |
VectorSpace on Feb 25, 2005 at 15:47 UTC | by smalhotra |
I needed something to iterate over every combination of a set of points, essentially to traverse a vector space. Since I couldn't find anything readily, I decided to roll my own. It's posted here in case anyone needs to do the same (or when I need to recall it). I didn't post the base class, Object, but it is essentially something that supports new(), set(), and get() methods.
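The core traversal, without the object wrapper, is an odometer-style loop over every combination; a sketch:
use strict;
use warnings;

# visit every combination of one value from each set, odometer style
sub each_combination {
    my ($callback, @sets) = @_;
    my @idx = (0) x @sets;
    while (1) {
        $callback->(map { $sets[$_][ $idx[$_] ] } 0 .. $#sets);
        my $i = $#sets;
        while ($i >= 0 && ++$idx[$i] > $#{ $sets[$i] }) {
            $idx[$i--] = 0;
        }
        last if $i < 0;
    }
}

each_combination(sub { print "@_\n" }, [0, 1], ['x', 'y'], [10, 20]);
|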
Solaris - change hostname / ip / default-router-ip script on Feb 19, 2005 at 22:38 UTC | by Qiang |
I got bored when I had to change the IP/hostname from time to time on Solaris 7 or 8 machines - there are too many files that need to be changed!
You can change the IP or hostname, or do both at the same time. If the new IP is on a different subnet from the old one, the default router IP gets changed too (as in the second example). Two examples of running this script are shown below.
Currently this script only prints out the commands it is going to perform. To use it for real, uncomment the following line in the script.
#print "\t $f changed\n" unless (system($cmd));
script -oldip [ip] -newip [ip] -oldhost [host] -newhost [host]
script -oldip 1.2.1.1 -newip 1.2.3.100
I wish I could make this script shorter :)
UPDATE: adds \Q \E and \b
Also, don't trust user input (although in this case it's only myself) and validate it before processing. |
IIS 6.0 Export and Import to Back Server on Feb 10, 2005 at 23:39 UTC | by LostS |
Ever needed to get your IIS metabase copied to your DR server nightly? Ever just needed to do a quick export of your IIS settings, change some stuff, and put it on a new/different system? Well, I finally found out how to do it on IIS 6.0 on a Windows 2003 server. So that is what this script does.
I hope you all enjoy. |
automatic-AutoBundle.pl on Jan 31, 2005 at 00:03 UTC | by hsinclai |
Run CPAN autobundle and send the Bundle to your archive directory, I always forget so now it's in cron.. |
Find children of process (unix/linux) on Jan 26, 2005 at 16:43 UTC | by Tanktalus |
Prints out all the child processes of a given process ID. The tough part is that ps is inconsistent across unix platforms.
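A minimal sketch of the core idea against POSIX-style ps output (the posted script deals with the per-platform ps quirks this glosses over):
use strict;
use warnings;

my $target = shift or die "usage: $0 pid\n";

# build a ppid => [child pids] map from ps output
my %kids;
open my $ps, '-|', 'ps', '-eo', 'pid,ppid' or die "cannot run ps: $!";
<$ps>;                                    # header line
while (<$ps>) {
    my ($pid, $ppid) = split ' ';
    push @{ $kids{$ppid} }, $pid;
}

# walk the tree breadth-first
my @queue = ($target);
while (@queue) {
    my $pid = shift @queue;
    for my $child (@{ $kids{$pid} || [] }) {
        print "$child\n";
        push @queue, $child;
    }
}
|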
Indent Perl on Jan 17, 2005 at 07:33 UTC | by dr.p |
Takes messy perl code and makes it beautiful. Doesn't matter if former indentation existed, or was done with spaces or tabs. Indent character is a space and indent size is set to 2 by default. These are easy to change near the top of the script.
WARNING: Don't use on anything other than Perl code. Unwanted results will most likely occur.
|
File Statistics on Jan 11, 2005 at 13:16 UTC | by sweetblood |
fstat [-a] [-s] [-m] [-p] [-u] [-h] file ...
-a Print files access time.
-m Print files modify time.
-s Print files size in bytes.
-p Print files permissions(mode).
-u Print files owner user id.
-h Print help(this screen).
If no options are present on the command line all statistics will be returned.
|
xls2tab - Simple MS Excel to TSV converter on Jan 10, 2005 at 19:07 UTC | by legato |
Converts XLS data into TSV format, putting multiple sheets into separate files. Output files have a .tab extension. This is particularly useful for reading XLS files on non-MS platforms, and for bulk-loading data in XLS sheets into an RDBMS.
Tested on Windows 2000 and Linux.
Module requirements: |
wmchoose on Dec 27, 2004 at 21:58 UTC | by blazar |
Naive Window Manager selection tool
Data used by the program is appended to it after the __END__ token. Embedded POD documentation describes its format for maintainers/admins (no perl experience/knowledge required). This is actually being used at a site: it's rudimentary but it works! Users seem to be satisfied.
Note that die()s and warn()ings intended for the final users are in "\n"-form, whereas those for maintainers are in the somewhat more informative "\n"-less one, so before pointing out there's an inconsistency consider that it's there by design...
Suggestions and improvements welcome. About the only feature I've considered adding myself is support for multi-line text to be inserted in ~/.xinitrc, but up until now nobody has requested it, at least in the environment where this is being used.
|
Input highlighter / visual grep on Dec 17, 2004 at 02:40 UTC | by Aristotle |
Inspired by pudge over at use.perl, here's a short little script you can use to highlight pattern matches in input data.
usage: hl [ -c colour ] [ -x ] pattern [ file... ] [ < input ]
You can use capturing parens in your pattern. In that case, you can supply multiple attributes separated by commas, which will be used to individually colour the submatches.
-x will suppress lines without matches.
Update: fixed massive offset calculation bug, hugely simplified the colourizing routine.
Due to the semantics of the @- and @+ arrays, my first stab was a horrible monster and incredibly difficult to debug, far harder to write than it promised to be. The special entries at index 0 indicating the start and end of the entire match required terrible contortions to take into account.
And, surprise surprise, the code was buggy.
In fixing my bug, I realized that the proper special case looked almost like a common case. And then I realized that by appending a phantom zero-length match and changing index 0 to instead signify a phantom zero-length 0th match, both special cases disappear.
Lesson: when implementing the semantics turns your brain to mush, change the semantics.
For a history of the code, look at the aforementioned use.perl thread.
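A stripped-down sketch of the basic highlighting loop (single colour, no submatch handling), built on Term::ANSIColor:
use strict;
use warnings;
use Term::ANSIColor qw(colored);

my $pattern = shift or die "usage: hl pattern [file ...]\n";
my $re = qr/$pattern/;

while (my $line = <>) {
    $line =~ s/($re)/colored($1, 'bold red')/ge;
    print $line;
}
|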
Remove Duplicate Files on Oct 29, 2004 at 02:38 UTC | by jfroebe |
Searches a list of directories provided on the command line and removes duplicates. It remembers previous runs (compressed delimited file) and is able to remove 'cache' entries that point to nonexistent files.
A summary is printed
Loaded 93031 entries
TOTAL files: 93030
Added files: 0
Deleted files: 0
Files not found: 0
|
Bulk file attachment extractor for Lotus Domino on Oct 14, 2004 at 19:44 UTC | by diotalevi |
A simple extraction tool for Lotus Domino applications. |
Discussion Section / Office Hours Preferences Analizer on Sep 14, 2004 at 05:15 UTC | by hossman |
This script parses a data file containing students' votes for when Discussion Sections (or office hours) should be held, and generates an Excel spreadsheet containing stats on how useful each trio of sections would be (along with a complete roster of everyone whose info is in the data file)
People who aren't enrolled or waitlisted are included in the roster, but their preferences are not counted.
See Also: the CGI to generate the DATA
|
norobotlog on Sep 06, 2004 at 01:42 UTC | by quartertone |
I always look at my Apache server log files from the command line. It always bothered me to see "GET /robots.txt" contaminating the logs. It was frustrating trying to visually determine which were crawlers and which were actual users. So I wrote this little utility, which filters out requests made from IP addresses which grab "robots.txt". I suspect there are GUI log parsers that might provide the same functionality, but 1) I don't need something that heavy, 2) I like to code, 3) imageekwhaddyawant.
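A minimal sketch of the two-pass idea (collect the robots.txt clients, then drop their lines), assuming the usual IP-first common log format:
use strict;
use warnings;

my $log = shift or die "usage: norobotlog access_log\n";

# pass 1: remember every IP that ever asked for robots.txt
my %robot;
open my $fh, '<', $log or die "cannot open $log: $!";
while (<$fh>) {
    $robot{ (split ' ')[0] } = 1 if m{GET /robots\.txt};
}

# pass 2: print only the lines from everyone else
seek $fh, 0, 0 or die "seek failed: $!";
while (<$fh>) {
    print unless $robot{ (split ' ')[0] };
}
|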
submit-cpan-ratings - upload ratings to CPAN for stuff you've used on Aug 20, 2004 at 17:09 UTC | by diotalevi |
I am posting this script here to perhaps gather any feedback about the implementation or design before I submit this to CPAN.
submit-cpan-ratings is a script which automates the process of finding the modules you've used in your code and submitting module reviews to http://ratings.cpan.org. For example, to submit a review of the modules you used in your source directory:
$ submit-cpan-ratings ~/src/a_script ~/src/a_directory ~/whatever
You'll be told which modules were found, and what the versions are. As each module is checked, http://search.cpan.org and http://ratings.cpan.org will be used to find the proper module name and version. If the module you used isn't on CPAN under the name you called it, or if the version you're using isn't available for rating, you won't be able to submit a rating. This uses the same .pause file that the cpan-upload script uses for your PAUSE credentials.
|
vimod on Aug 12, 2004 at 21:07 UTC | by tye |
Use 'vi' as your 'pager' to view module source code.
This is a bit like "perldoc -m Module" then "v" to tell your pager (if it is smart enough) to throw you into your favorite editor (but less typing, supports multiple module names, leaves the paths to the modules displayed after you exit the editor, doesn't require your pager support "v", etc.)
Requires Algorithm::Loops.
(Updated to remove mispelt "pathes". Thanks, gaal.) |
File Chunkifier on Jun 15, 2004 at 14:09 UTC | by husker |
Splits a file into N evenly-sized chunks, or into chunks with at most N lines each. Works on Windows or UNIX. Allows header or footer text to be prepended/appended to each output file. |
Comment Stripper script for unix on Jun 14, 2004 at 01:55 UTC | by hsinclai |
e.pl
invoke as "e" or "ee"
Comment stripper for unix, useful during system administration. Removes blank lines, writes output file, strips "#" or ";". Tries to preserve shell scripts.
Please see the POD |
vimrc for documenting subs on May 19, 2004 at 13:50 UTC | by scain |
This is a vimrc that will automatically create documentation and skeleton code for a subroutine. To use it, type the name of the new subroutine in a line by itself in vim, the press either F2 for a generic sub or F3 for a Get/Set sub. The result for a generic sub looks like this:
=head2 generic_sub_name

=over

=item Usage

  $obj->generic_sub_name()

=item Function

=item Returns

=item Arguments

=back

=cut

sub generic_sub_name {
  my ($self, %argv) = @_;


}
and for a get/set sub, like this:
=head2 get_set_name

=over

=item Usage

  $obj->get_set_name()        #get existing value
  $obj->get_set_name($newval) #set new value

=item Function

=item Returns

value of get_set_name (a scalar)

=item Arguments

new value of get_set_name (to set)

=back

=cut

sub get_set_name {
  my $self = shift;
  return $self->{'get_set_name'} = shift if @_;
  return $self->{'get_set_name'};
}
|
intelli-monitor.pl on May 06, 2004 at 18:22 UTC | by biosysadmin |
This is a quick program that I wrote for a friend who had some flakiness on his server. It parses the output of `ps ax`, checks for vital processes, and restarts any processes that are stopped. |
stopwatch on May 06, 2004 at 16:25 UTC | by meonkeys |
Simple console-based stopwatch. Assumes 80-character wide terminal. Stop with Control-C. Requires Time::Duration. |
MakeAll.Pl on May 01, 2004 at 12:58 UTC | by demerphq |
NAME
MakeAll.pl - Build, test and optionally install a module on all versions of perl
located in path.
SYNOPSIS
MakeAll.pl [options] [file ...]
Options:
--help brief help message
--man full documentation
--verbose be talkative, repeat or assign a higher value for more
--install do an install in each perl if build is good
--no-run don't run test, just scan path
--scan short for --verbose --no-run
DESCRIPTION
This program will run the standard
perl Makefile.PL
make
make test
and optionally
make install
for each distinct perl executable it finds in the path (it uses some tricks to make sure they aren't dupes). On Win32 it will use make or nmake as necessary, based on whether the script itself is run under ActiveState perl or Cygwin perl. On non-Win32 machines it will use make.
AUTHOR AND COPYRIGHT
Copyright Yves Orton 2004. Released under the same terms as Perl itself. Please
see the Perl Artistic License distributed with Perl for details.
|
VarStructor 1.0 on Apr 30, 2004 at 19:58 UTC | by Wassercrats |
Alternative to Perl's reset function, with extra features. Also could be used to print variables and their values, including "my" variables. See top comments in script.
I'll probably add an option to exclude the "my" variables, and I intend to make this into a CPAN module (it's currently in subroutine form).
This is an improved version of VarStructor. |
Copy Permissions on Apr 25, 2004 at 19:25 UTC | by BuddhaNature |
Based on Ben Okopnik's cpmod script, this script takes two directories (or files) and recursively sets the permissions of files/directories that exist in both to those of the version in the first. Handy if you want to check something out of a version control system and set the permissions to those of an already existing copy elsewhere.
NOTE: The full paths of both directories/files must be used, or ~/ if in your home directory.
UPDATED: Made use of japhy's suggestion.
UPDATED: Made use of some of davido's suggestions and those in his perlstyle piece.
UPDATED: Made use of another of davido's suggestions regarding checking the matches. |
mail-admin.pl on Apr 23, 2004 at 00:34 UTC | by biosysadmin |
I wrote mail-admin.pl this afternoon in order to manipulate my server's MySQL + Postfix virtual tables. It's a little rough at the moment, but I'm already planning a smoother version 0.2. :)
Information on the structure of the MySQL tables is available here, as well as details on the Postfix configuration that I used. |
VarStructor II -- Abbreviation tool on Apr 20, 2004 at 13:40 UTC | by Wassercrats |
Similar to Text::Abbrev (Text::Abbrev output example here). I intend to use it to create unique variable names based on the first few characters of the variables' values (the elements of @Lines). Variables that are still identical get numbered.
It's already set up as a demo, so try it out and see how it differs from Text::Abbrev and similar modules.
@Lines contains demo phrases to be abbreviated. The configuration options are $Max_Length (of returned phrases), $Letters_Only (1=yes, 0=no), and $Replacement (string that replaces non-letters).
The code is full of global variables, not enough comments, and probably other bad stuff, but I know that's what everyone loves about me. I intend to fix it up. Maybe. |
grepp -- Perl version of grep on Apr 15, 2004 at 01:45 UTC | by graff |
It's the old basic unix "grep" on steroids, wielding all the regex power, unicode support and extra tricks that Perl 5.8 makes possible, without losing any of grep's time-tested elegance. |
VarStructor on Apr 14, 2004 at 22:56 UTC | by Wassercrats |
This script is obsolete. See updated script at VarStructor 1.0.
An alternative to the eventually-to-be-deprecated reset function, plus lists variables and their values. Place VarStructor in the script containing the variables. Configurable with $CLEAR and $OMIT.
One advantage over some alternatives for listing variables is that variables in the code will be listed even if they weren't "seen" during run time. This includes variables within comments, though that part isn't an advantage (bottom of my list of things to fix).
Other limitations are it doesn't handle hashes or variables beginning with "_" and spacey values will be spacey looking (not visually delimited) in the output. I heard something about variable names containing a space not working too (whatever they are). I might fix all that, depending on the response I get. |
count tables in all databases of a mysql server on Apr 11, 2004 at 06:44 UTC | by meonkeys |
This will give counts of the number of tables in each database in a MySQL server, as well as a total number of tables across all databases in the given server.
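A minimal sketch of the queries involved, with hypothetical credentials:
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:host=localhost', 'user', 'password',
                       { RaiseError => 1 });

my $total = 0;
for my $db (@{ $dbh->selectcol_arrayref('SHOW DATABASES') }) {
    my $tables = $dbh->selectcol_arrayref("SHOW TABLES FROM `$db`");
    printf "%-30s %d\n", $db, scalar @$tables;
    $total += @$tables;
}
print "total tables: $total\n";
$dbh->disconnect;
|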
automatically mount/umount USB mass storage device with hotplug on Apr 06, 2004 at 18:14 UTC | by meonkeys |
This script will allow you to plug in a USB Mass Storage Device (digital cameras, flash readers, etc) and have Linux automatically mount it, as well as unmount it on unplug. Someday Linux distros will do this for you (if they don't already), but for now, this may be handy. Requires 'hotplug' (for assignment of actions on device plug-in) and 'logger' (for writing status messages to the system log, probably /var/log/messages).
Before you ask questions, read this tutorial, which covers all prerequisites besides those I've already mentioned.
This script was designed for use with my Leica digital camera, so please excuse any hardcoded values including the string 'leica'.
|
apt-topten on Apr 05, 2004 at 21:32 UTC | by pboin |
Apt-topten will help bandwidth-challenged folks make good decisions on what packages would be good candidates to remove before doing an 'apt-get upgrade'. It lists the top packages that apt wants by size, descending. |
ncbi-fetch on Apr 05, 2004 at 13:48 UTC | by biosysadmin |
ncbi-fetch will fetch sequences from NCBI using the Bio::DB::GenBank Perl module (available as part of the BioPerl package). Each sequence is saved to a separate file named by accession number. This program will introduce a three-second delay between successive requests in order to avoid placing too much stress on the NCBI servers.
|
repinactive on Mar 10, 2004 at 19:40 UTC | by sschneid |
repinactive prints a summary report of accounts deemed 'inactive'. |
jdiff - jar/java diff on Mar 04, 2004 at 21:51 UTC | by vladb |
Compares java classes in a set of jar files and prints out a straightforward inconsistency report. The report shows missing and modified files for each jar file. |
seq-convert on Feb 23, 2004 at 20:28 UTC | by biosysadmin |
A quick and dirty program that uses the BioPerl SeqIO modules to convert biological sequence data.
seq-convert [options] input-file
options:
--input <inputformat>
--output <outputformat>
--formats
--subseq <range>
--help
OPTIONS
--input
Specifies the format of the input file. Defaults to fasta.
--output
Specifies the output format. Defaults to fasta.
--print-formats
Prints the sequence file formats available to this program.
--subsequence range
Selects a subsequence of the sequence contained in the input file. Ranges should have the form x-y, where x and y are positive integers.
--help
Prints a detailed help message.
--version
Prints version information.
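The heart of such a converter is only a few lines of Bio::SeqIO; a sketch with example formats:
use strict;
use warnings;
use Bio::SeqIO;

my ($infile, $informat, $outformat) = @ARGV;
$informat  ||= 'fasta';
$outformat ||= 'genbank';

my $in  = Bio::SeqIO->new(-file => $infile,  -format => $informat);
my $out = Bio::SeqIO->new(-fh   => \*STDOUT, -format => $outformat);
while (my $seq = $in->next_seq) {
    $out->write_seq($seq);
}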
|
helios - updated atchange on Feb 10, 2004 at 21:42 UTC | by biosysadmin |
Helios will monitor a file (or set of files) for changes to their modification times, and execute a command (or list of commands) after the modification time is updated. Inspired by the atchange program available from http://www.lecb.ncifcrf.gov/~toms/atchange.html. I use this program in order to save myself some keystrokes while compiling java files:
$ helios --verbose StringSort.java 'javac *.java'
Watching StringSort.java, initial mtime is 1076448240
StringSort.java last modified at 1076524024.
Executing javac *.java ... done.
Or, to save myself even more keystrokes:
$ helios --multiple-cmds StringSort.java 'javac *.java','java StringSort'
It's fun to watch the output scroll by after just saving my file. :)
|
Crosslink on Feb 02, 2004 at 08:40 UTC | by BrentDax |
For a Web project using Embperl, I have to show a small, fixed set of pages with several different templates (base.epl scripts, essentially, although I used a different extension). I chose to do this by creating hard links between the content files in each directory.
crosslink.pl (or whatever you choose to call it) takes a source directory, a destination directory, and a list of files. It then runs link("sourcedir/filename", "destdir/filename") (or symlink) on each file. This allows me to do things like:
crosslink template1 template2 index.epl bio.epl links.epl images/picture.jpg |
Build Bundle releases on Jan 29, 2004 at 18:24 UTC | by gmpassos |
This script will build a release of a module and all its dependencies. (Read the script comments for instructions). |
View last login times of everyone on your system on Jan 15, 2004 at 17:42 UTC | by merlyn |
View the last login times of everyone on your system. You may need to adjust the struct for unpacking or the location of your lastlog file. |
PDF Concatenation and Extraction Tool on Jan 14, 2004 at 21:59 UTC | by rob_au |
This is a PDF concatenation tool designed to merge PDF files or portions thereof together to a single output PDF file. The command line arguments for this tool take the form:
pdfcat.perl [input files ...] [options] [output file]
-i|--input [filename]
- Specify an input file for concatenation into the output file. If a single file is specified with the --page parameter, this script can also be used for extracting specific page ranges.
-o|--output [filename]
- Specify the output file for concatenated PDF output.
-p|--page|--pages
- This argument, which follows an input file argument, defines the pages to be extracted for concatenation from a given input file. If this argument is not defined, all pages from the input file are concatenated. The pages specified for extraction may be separated by commas or designated by ranges.
For example, the arguments --input input.pdf --pages 1,4-6 would result in pages 1, 4, 5 and 6 inclusively being extracted for concatenation.
|
Which version is this module? on Jan 14, 2004 at 16:19 UTC | by bart |
Sometimes you wonder what version of a module you have, and it's easy enough to come up with a oneliner to do it, loading the module and printing out the value in its $VERSION package variable. However, life can be somewhat easier still, and so I wrapped this oneliner into a .bat batch file (for Windows/DOS). So copy this one line, paste it into a new text file "version.bat" (avoid adding newlines if you can), and save somewhere in your path — I saved it in my perl/bin directory, next to the perl executable.
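The one-liner in question is essentially along these lines (a sketch, not the exact batch file; quoting differs between cmd.exe and a unix shell):
perl -le "my $m = shift; eval qq{require $m} or die $@; print $m->VERSION" CGI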
Use it from the "DOS" shell as:
version CGI
or
version DBD::mysql |
URL monitoring (on ur LAN or on the internet vie a proxy server) on Jan 06, 2004 at 12:34 UTC | by chimni |
This script is used to check whether a URL is available or not.
Perl's LWP module is used.
The script has functionality for direct access as well as authenticated access through a proxy server.
The HTTP GET returns status 200 if everything is OK.
The timeout value has been set to 60 seconds; if the URL does not respond within that time, it is considered a failure.
Pardon the lack of POD; I used a general commenting style.
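A minimal sketch of the LWP part, with a hypothetical proxy host and credentials:
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $url = shift or die "usage: $0 url\n";

my $ua = LWP::UserAgent->new(timeout => 60);            # 60-second timeout, as above
$ua->proxy('http', 'http://proxy.example.com:8080/');   # hypothetical proxy

my $req = HTTP::Request->new(GET => $url);
$req->proxy_authorization_basic('proxyuser', 'proxypass');   # hypothetical credentials

my $res = $ua->request($req);
if ($res->code == 200) {
    print "$url is up\n";
} else {
    print "$url FAILED: ", $res->status_line, "\n";
}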
|
Self Extracting Perl Archive - SIP v0.45 on Jan 01, 2004 at 12:49 UTC | by #include |
This is a complete rewrite of Self Extracting Perl Archive, turning it into a complete, usable utility named SIP (Sip Isn't Par).
The latest version will always be available from the SIP Website.
Takes a list of files and converts them into a Perl script that extracts the files when run, with the ability to display text or files, check file integrity with MD5 hashes, run commands before and after extraction, and create 'silent' extractors. SIP's main purpose is NOT to be an alternative to par, but to be the basis for a Perl software installation system. Uses File::Basename, Digest::MD5, File::Find, File::Path, and Getopt::Mixed.
Usage:
sip.pl OPTIONS file,file,...
Options:
-v,--version Print version and exit
-h,--help Print this text
-n,--nobanner Do not print banner in output script
-N,--name text Changes the name displayed in the banner
-s,--silent Output script executes silently
-p,--text text Print text at beginning of output script
-P,--print filename Prints the contents of a text file at beginning of output script
-m,--md5 Verify file integrity. Output script will require Digest::MD5
-f,--force Force extraction of damaged files
-o,--output filename Write output to file
-w,--overwrite Automatically overwrite existing files with output
-r,--run command Execute a command after extraction
-R,--pre command Execute a command before extraction
-d,--dir path Add all files in directory to script
-D,--recursive path Add all files in directory (recursive) to script. Directory structure is recreated on extraction.
-t,--temp Extract all files to temp directory
-l,--location directory Extract files to the specified directory
-b,--noshebang Do not add shebang to output
-B,--shebang text Adds a shebang other than #!/usr/bin/perl
UPDATE: Added the -r option, so you can use SEPA scripts as installers.
UPDATE: Added the -R and -f options, making using SEPA as an installer system more viable.
UPDATE: Added the -d and -D options, making it possible to add all files in a directory to a script
UPDATE: Added the -t option, allowing for temp directory extraction, and changed the name of the program to reflect on some comments :-)
UPDATE: Fixed a bug with the -r and -i options together
UPDATE: Fixed a bug with the -D option, and added the -e option
UPDATE: Fixed a whole slew of bugs and added the -l option. If the -D option is used, the directory structure is recreated upon extraction.
UPDATE: Even more bugfixes and options. Added the -b,-B,-w, and -N options, allowing you to remove the shebang from the output script, add a different shebang, automatically overwrite files on output, or change the name displayed in the banner, respectively.
UPDATE: More bugfixes and I added POD documentation.
|
filemod on Dec 31, 2003 at 20:10 UTC | by neuroball |
Small script that recursively searches the specified or current working directory for files with a modified timestamp in a specified date/time range.
Update: Detabbed source code.
|
Self-Extracting Perl Archive on Dec 29, 2003 at 00:40 UTC | by #include |
Takes a list of files and converts them into a Perl script that extracts the files when run. Uses File::Basename and Digest::MD5. The script is printed to STDOUT, and requires nothing but Perl and Digest::MD5.
SUPER MAJOR UPDATE: Removed the GUI, and added file integrity checking with MD5 hashes.
|
mvre - MoVe files based on given Regular Expressions on Nov 21, 2003 at 03:44 UTC | by parv |
UPDATE, Nov 24 2003: Completely updated the code as of version 0.41; fixes the check for existing files, with a few new options/features.
This module requires Parv::Util, which I use quite often. The most current version of "mvre" is located at
http://www103.pair.com/parv/comp/src/perl/mvre.
Back to mvre; pod2html has this to say...
In default mode, mvre moves files located in the current directory, matching a (Perl) regex, to the directory specified by -out-dir. An output directory is required in all cases. See the -out-dir and -out-find-path options.
Files to be moved, also referred to as "input files", are specified either via the -in-select option and/or as argument(s) given after all the options.
The order of the input selection regexes (-in-select or -select option) and the output directories specified is important. A one-to-one relation exists between each regex to select input files and the output directory, either explicitly specified or found in the path specified via the -out-file-path option.
|
onchange - a script to run a command when a file changes on Nov 20, 2003 at 20:19 UTC | by samtregar |
I like to write articles in POD and preview the results of running pod2html in my web browser. However, this requires me to run a command in a shell every time I make an edit. Even if it's just 'make' I still can't be bothered.
So I wrote this script to run a command every time a file changes. It requires Time::HiRes and Getopt::Long. Read the POD for more information.
UPDATE: I added code to do a recursive walk when onchange is passed a directory. This enables onchange to watch a whole directory tree. It won't notice new files added though... (Feb 24th 2005)
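A minimal sketch of the polling loop at the heart of a tool like this (single file, fixed interval, no recursion):
use strict;
use warnings;
use Time::HiRes qw(sleep);

my ($file, $cmd) = @ARGV;
die "usage: $0 file command\n" unless defined $cmd;

my $last = (stat $file)[9] or die "cannot stat $file: $!";
while (1) {
    sleep 0.5;
    my $mtime = (stat $file)[9] or next;    # file may be mid-rewrite
    if ($mtime != $last) {
        $last = $mtime;
        system $cmd;
    }
}
|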
read/write tester on Nov 19, 2003 at 12:20 UTC | by bronto |
There were problems with an Oracle database writing data over NFS here. A script was needed to do some random read/write parallel activity and check where things were going wrong.
If you call this script as
./dbcreate.pl 15 8k 90 1m 100m test.txt
it will spawn 15 children, read/write random data in 8kb blocks, doing 90% of input and 10% of output; it will initialize the test file to 1Mb size, will read/write 100Mb of data; the test file will be test.txt
I hope you'll find it useful
--bronto |
rename 0.3 - now with two extra cupholders on Nov 01, 2003 at 15:43 UTC | by Aristotle |
Update: originally forgot to include shell metacharacter cleansing in --sanitize.
This is a much improved version of the script I posted at rename 0.2 - an improved version of the script which comes with Perl, which in turn is an evolution of the script that comes with Perl. (PodMaster informs me that it's only included in ActivePerl. I haven't looked.)
It now does everything I could personally ask for in such a script; thanks particularly to graff and sauoq for feedback and food for thought.
I also stole a few options from Peder Strey's rename on CPAN. That one has additional options for finely grained control over keeping backups of the files under their old names; personally, I don't see the merit. If you do, please let me know. In either case, even if anyone thinks such facilities would be good to have, I feel they should be provided by a more general mechanism. After all, this is a script you can pass Perl code to; while there's good reason to optimize for the common case, I feel it is better to leave the specialised cases to the expressive prowess of Perl rather than try to invent a narrowly defined interface for them.
Blue-sky stuff: just yesterday I also decided this is basically a perfect vehicle to build a batch MP3 processor onto. Now I plan to eventually add facilities for querying as well as manipulating the ID3 tags in MP3 files alongside their filenames. Given a cleanly integrated interface, this script would naturally lend itself to that task and become an MP3 renamer to end all MP3 renamers - without even focussing on that task. Of course all MP3 processing stuff would be optional and its main purpose would still be plain old renaming of files.
Anyway, without further ado, have at it. Please give this a thorough whirl and let me know of any kinks. |
DBI SQL Query tool on Oct 27, 2003 at 21:15 UTC | by runrig |
I use this to execute sql from Vim. Written because all the query tools available to me on the PC suck, so it somewhat emulates a query tool on unix that I like, which displays the selected fields across the page in columns if they will fit on the screen, but displays them vertically down the screen if they won't fit. I also display database column types, because I often see column names such as 'item_no', and there are often mostly just numbers in the column, but I'd like to know if it's really a character column.
In this script, I assume all connections are through ODBC (easy enough to change that though), and if you are wondering what all the logic is about with the dsn and dbname, it is because in my version, I do a lot of convoluted mapping of database names to user/passwords and where to find dsn-less connection strings. Creating a dsn-less connection is easy, you just create a file dsn to the type of database you need, then use the contents of that file as a template for the dsn variable, substituting the desired database name if needed. This uses code from WxPerl Login Dialog, but with my default db/user/password/dsn mappings, I rarely call that module.
This behaves a bit odd on SQL Server databases, for instance, it thinks update statements are really select statements, and a couple of rows are fetched (and returned!) from the database. I'm not sure if anything can be done about this short of scanning the sql statement beforehand, but I get amused every time it happens, so I leave it as is :-)...update: actually, it only seems to be on certain update statements in one particular database...weird
Enjoy.
Updated 2004-03-30 |
rename 0.2 - an improved version of the script which comes with Perl on Oct 25, 2003 at 22:14 UTC | by Aristotle |
Update: obsolete, please check rename 0.3 - now with two extra cupholders instead.
You probably know the script that comes with Perl. Initially, I started hacking on it because I didn't want to pull out the old rename binary for very simple substitutions, but found it too cumbersome to write a Perl s/// for the same job. Then, feeping creaturism set in and I started adding more and more little stuff.. eventually, it grew to something I wouldn't want to miss from life on the command line. |
cpandiff - diff local source against CPAN on Oct 24, 2003 at 05:47 UTC | by diotalevi |
This compares a local module distribution against the current CPAN version and produces a unified, recursive diff. |
Idealized optrees from B::Concise on Oct 21, 2003 at 21:50 UTC | by diotalevi |
This alters the output of B::Concise so you can view an idealized optree. It removes all the execution order, context and null nodes until the output is nicely readable.
perl -MO=Concise -e '...' | ./idealized_ops
That transforms something like this:
l <@> leave[t1] vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 2 -e:1) v ->3
k <2> leaveloop vK/2 ->l
7 <{> enteriter(next->g last->k redo->8) lKS ->i
- <0> ex-pushmark s ->3
- <1> ex-list lK ->6
3 <0> pushmark s ->4
4 <$> const(IV 1) s ->5
5 <$> const(IV 100) s ->6
6 <$> gv(*_) s ->7
- <1> null vK/1 ->k
j <|> and(other->8) vK/1 ->k
i <0> iter s ->j
- <@> lineseq vK ->-
8 <;> nextstate(main 1 -e:1) v ->9
- <1> null vK/1 ->g
c <|> and(other->d) vK/1 ->g
b <2> lt sK/2 ->c
- <1> ex-rv2sv sK/1 ->a
9 <$> gvsv(*_) s ->a
a <$> const(IV 50) s ->b
f <@> print vK ->g
d <0> pushmark s ->e
- <1> ex-rv2sv sK/1 ->f
e <$> gvsv(*_) s ->f
g <0> unstack v ->h
h <;> nextstate(main 2 -e:1) v ->i
Into this:
leave
enter
nextstate
leaveloop
enteriter
pushmark
const
const
gv
null
and
iter
lineseq
nextstate
null
and
lt
gvsv
const
print
pushmark
gvsv
unstack
nextstate |
Snapshot.pm on Oct 19, 2003 at 11:53 UTC | by robartes |
This module implements a way of taking directory structure snapshots using the rsync/hardlink method from Hack #74 in Linux Server Hacks. It's fairly basic for the moment, and limited to Unix platforms. Future versions will become more universal through the use of link and File::Find. |
pqset on Sep 30, 2003 at 15:14 UTC | by neilwatson |
pqset enables you to check and set user disk quotas on a linux system.
03/10/02 updated code with cleaner option/switches. |
Deleting a Subtree from LDAP on Sep 17, 2003 at 20:39 UTC | by mayaTheCat |
hi monks,
the following code recursively deletes a subtree from LDAP;
since only leaf nodes can be deleted from LDAP, this code first traverses the subtree, and then deletes the nodes in the reverse order in which they were traversed.
If the port is other than the default port (389), it can be appended to the server string, delimited with a ':';
e.g. if the server is ldapserver and the port is 889, then the following string works: 'ldapserver:889'.
If we do not want to stress the server, we can periodically pause the deletion for a while through the parameters $sleepPeriod and $sleepDuration.
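A minimal sketch of the recursive delete with Net::LDAP (hypothetical server, credentials and base DN; the pausing described above is left out):
use strict;
use warnings;
use Net::LDAP;

my $ldap = Net::LDAP->new('ldapserver', port => 889) or die "$@";
$ldap->bind('cn=admin,dc=example,dc=com', password => 'secret');   # hypothetical credentials

delete_subtree($ldap, 'ou=old,dc=example,dc=com');                 # hypothetical base DN
$ldap->unbind;

sub delete_subtree {
    my ($ldap, $dn) = @_;
    # children first, depth-first, since only leaf entries can be removed
    my $res = $ldap->search(base => $dn, scope => 'one', filter => '(objectClass=*)');
    delete_subtree($ldap, $_->dn) for $res->entries;
    my $del = $ldap->delete($dn);
    warn "could not delete $dn: ", $del->error, "\n" if $del->code;
}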
|
css: Counter Strike Scanner on Aug 10, 2003 at 03:09 UTC | by skyknight |
I enjoy a good game of Counter Strike just as much as the next nerd, but I hate the interfaces for getting into a game. There are a small number of servers on which I like to play, and if there isn't a good game on any of them, I'd rather not waste my time at all. As such, I want a quick way to know if there are some good games available. Half-Life itself is too heavyweight in its startup. GameSpy is also slow to start up, and furthermore extremely annoying as you are forced to watch all kinds of crappy advertising. What I wanted was a simple command line script that quickly apprises me of what's up on my favorite servers, and so I wrote this. You can simply specify a server address and port on the command line, or use a -f switch and a name for a file that contains a bunch of lines, on each of which is a server address and port, separated by one or more spaces. Thus usage is very straightforward. The output is nicely formatted as well. As one minor tweak, server names have spaces translated to underscores, so as to allow for distinct space separated fields for passing off to other command line scripts for further processing, say, if you wanted to sort based on ping time. Enjoy! Oh yeah... Half-Life doesn't use a nice text based protocol. You get to wrangle with messy pack and unpack statements. Yum. |
pmdesc2 - lists modules with description on Aug 05, 2003 at 22:23 UTC | by Aristotle |
I recently looked at Tom Christiansen's scripts. One of them, called pmdesc, lists any or a subset of the modules you have installed, complete with the version number and a description. Handy! Unfortunately, it has several annoying traits. First of all, it's slow. Which one might live with, except it also picks up modules relative to the wrong directories, so Foo::Bar might be reported, for example, as i686-linux::Foo::Bar.
No problem, I thought, I'll just hack it. Unfortunately, the source is, well, less than tidy, with several unnecessary-to-be globals and badly distributed responsibilities. For so little code, it is surprisingly confusing to follow.
So what's a hacker to do, eh? Here's a clean version.
I fixed the directory problem by visiting the longest paths first, which ensures we see any subdirectories prior to their ancestors while traversing the trees.
Speed was addressed by using an ExtUtils::MakeMaker utility function. While this imposes restrictions on the $VERSION assignments this script can cope with, CPAN uses the same function, so anything from CPAN is likely to comply anyway. Compared to the old code which had to actually compile each module, this is orders of magnitude faster. |
Let the staff back in! on Aug 01, 2003 at 09:45 UTC | by Intrepid |
rectify-perms-perl-sitedirs
A *NIX Perl installation administration / maintenance utility.
This script is a specific instance of the "recurse and fix permissions / file ownership" type that has been written (doubtless) many times over by *nix system admins around the globe. As such it isn't very special, but it might still be particularly welcomed by newbies and those not used to quickly writing scripts for such admin tasks.
Please do not attempt to read this explanation if you have already started to get a headache or tend to get them. It will most likely make it worse. ;-)
|
vimDebug on Jul 26, 2003 at 18:45 UTC | by toiletmonster |
An integrated debugger for vim. Now you can step through your perl code inside vim. Also supports jdb (java), gdb, and pdb (python).
get the latest version at http://vim.sf.net/scripts/script.php?script_id=663 or
http://iijo.org
comments, bugs, suggestions, please!!
|
(code) Command Antivirus Update Fetch+Extract on Jul 23, 2003 at 14:02 UTC | by ybiC |
Checks the Command Antivirus website for an updated virus definition file, and fetches it if newer than the local copy. Then inflates the self-extracting executable to an 'ms patch' file named for its release date. This msp file is copied to latest.msp which can be run (outside this program) to update client PCs.
From a perlish perspective, building this has been a great refresher, especially on minimizing the number of global variables, as I've been Away From Perl for months. It's good to be back, even for code as straightforward as this. 8^)
Sample run:
http://download.commandcom.com/CSAV/deffiles/DEFMSP.EXE: 200 OK
PKSFX(R) Version 2.50 FAST! Self Extract Utility for Windows 95/NT 4-15-1998
Extracting files from .ZIP: c:/DEFMSP.EXE
Inflating: 030717.msp
1 valid msp file(s):
030717.msp
Latest msp: 030717.msp
Today: 030723
030717.msp copied to current.msp
Done... <enter> to exit
|
uniq_exec.pl - filters duplicate program output on Jul 18, 2003 at 20:12 UTC | by diotalevi |
This is sort of like uniq for programs - the script only prints your program's output when it is different from the last time. I wrote this so I could put `whois -i somedomain.org` in a cron job and only get notices when Network Solutions' database changes. You could use this with lynx or wget to see when a web page changes, etc. I assume this duplicates some common unix tool, so please let me know which one that is; I only wrote this because I didn't know what the tool was called. |
Yet another personal backup script on Jul 10, 2003 at 12:55 UTC | by EdwardG |
Yet another personal backup script. |
Arquivo de Log on Jul 08, 2003 at 18:48 UTC | by Mago |
For current and future members who speak Portuguese.
These functions make it easy to put together a log or trace file.
Some changes may be necessary to suit the needs of your program.
You must always open the log file first.
With the log file open, you can send messages and error codes to control the file.
Before the program finishes, you must always pass a return code so that the file is closed correctly.
P.S. Remember that this code is a reference to make developing your program easier.
|
Perl script commenting assistant on Jul 06, 2003 at 20:13 UTC | by Wafel |
Seeks out interesting tidbits of code and interactively assists in commenting them. The code is ugly, though it might come in handy for commenting those old scripts that you've got lying around. |
Value Restoration on Jun 27, 2003 at 13:13 UTC | by Lhamo_rin |
Thanks to the help I received from my fellow monks I was able to finish this script. I use it to restore certain values after a power failure that would, by default, be reset to 0. The user is prompted with the option of restoring the old values, overwriting with new, or canceling altogether. The read_file is just variable names and values separated by whitespace. I thought someone might be able to use it. |
SXW Writer on Jun 13, 2003 at 14:56 UTC | by benn |
I needed something to write out Open Office .sxw files, and was surprised to find nothing on CPAN, so I spent a couple of days knocking this up. (Typical huh? Just doing it in Open Office would take me 5 minutes :)).
It's very raw, but it works, and may serve somebody as a starting off point for something else. All the documentation is in POD at the bottom. Enjoy.
Cheers,Ben.
Update added add_font example to synopsis |
Create/Extract Zip Archives on Jun 03, 2003 at 04:22 UTC | by #include |
Using this script, you can create and extract zip archives. Not only is this script functional as a utility, it also demonstrates usage of the Archive::Zip module. |
peek - output a one-line preview of stdin on Apr 27, 2003 at 17:36 UTC | by halley |
Some commands give visual feedback that is either feast or flood. They either remain blissfully silent while they work on a long task, or they blather endlessly about every step of their progress.
Many times, it'd be handy to have feedback somewhere in between. You can still see it's running, but it doesn't scroll your terminal out of sight.
This is essentially a Perl one-liner, written to a script. It's trivial. It's not about how tough the task is, but whether you find it useful.
Update: printf and --keep suggestions implemented |
Find Scripts and Make Executable on Apr 22, 2003 at 04:16 UTC | by The Mad Hatter |
The other day I had a large number of Perl and shell scripts that were not marked executable. They needed to be, but there was a great deal of them scattered all over the place (a source tree directory, to be precise) and I wasn't about to find and chmod them myself. This is the result.
It will check the first line of a file to see if it starts with a shebang (#!), and if it does, will make it executable for the user who owns it. I shell out to the system chmod because it was simpler and quicker to use the symbolic permissions notation (u+x) instead of stat-ing the file, adding the correct value to the octal permissions, and then using Perl's chmod with that modified value.
File names to check are specified as arguments. I used the script in combination with find: find . -type f -exec isscript \{\} \; |
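For the shebang test described above, a minimal sketch of the idea (not the posted isscript itself), assuming file names arrive as arguments:
for my $file (@ARGV) {
    next unless -f $file;
    open my $fh, '<', $file or do { warn "$file: $!\n"; next };
    my $first = <$fh>;
    next unless defined $first and $first =~ /^#!/;   # only files that start with a shebang
    system 'chmod', 'u+x', $file;                     # symbolic mode, as described above
}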
New switches for perl(1) on Apr 21, 2003 at 13:06 UTC | by Aristotle |
If, like me, the vast majority of oneliners you write are -n or -p ones, you'll probably have cursed at the verbosity and unwieldiness of the -e'BEGIN { $foo } s/bar/baz/; END { $quux }' construct.
Hey, I thought, I can do better than that.
So I ripped apart the Getopt::Std code and based this script on it, which adds two options to Perl:
- -B
- This works just like -e, except it also wraps the code in a BEGIN block.
- -E
- This also works like -e, except it wraps the code in an END block.
Enjoy.
Update: changed hardcoded location of Perl binary to $^X in last line as per bart's suggestion. |
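As an illustration of the -B/-E idea above only (the wrapper's actual name isn't given, so assume it were installed as, say, perlbe; access.log is just an example file):
# plain perl: BEGIN and END spelled out inside -e
perl -ne 'BEGIN { $n = 0 } $n++ if /ERROR/; END { print "$n\n" }' access.log
# with the two extra switches described above (hypothetical invocation)
perlbe -B'$n = 0' -ne '$n++ if /ERROR/' -E'print "$n\n"' access.log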
unhead and untail on Apr 11, 2003 at 22:14 UTC | by halley |
The historically common head(1) and tail(1) commands are for keeping the head or tail of stream input, usually by a count of lines. This pair of scripts differ in three respects:
- these scripts don't show the head or tail, they show everything except the head or tail,
- these scripts don't count lines but instead work on a single regex (regular expression) to find a matching "cut here" point in the text,
- these scripts can actually modify/edit text files in place if filenames are given.
Check out the pod output for a complete man-page.
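The core behaviour can be approximated with one-liners like these (the real unhead/untail add in-place editing and options; whether the matching line itself is kept is their call):
# unhead: drop everything up to and including the first line matching PATTERN
perl -ne 'print if $found; $found = 1 if /PATTERN/' file
# untail: drop everything from the first line matching PATTERN onward
perl -ne 'last if /PATTERN/; print' file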
|
Anonymous User Add For Linux Shell on Mar 29, 2003 at 03:08 UTC | by lacertus |
I am 'presiding madman' of a Chicago-based LUG, and I thought it might be apropos to allow users who've no experience with a *nix shell to be able to create their own account on the public webserver. Essentially, I have created a password-protected 'newuser' account, whose information I give out upon a member's registration so they can log in. The password file has no 'shell' per se for this 'newuser' account; rather, this script is the shell. Of course, you must be quite careful with something like this, and while I have addressed all the security issues that come to mind, I'm sure this script isn't vulnerability free. Feel free to contact me with questions/suggestions/patches (if yer real cool ;)
The script allows a newuser to create a username and assign a password of their choice; what's more, it logs all newusers, emails, etc, and also emails all this info to the administrator, so you can keep apprised of what's going down. Enjoy!
Ciao for Now,
Lacertus |
[vt.ban] simple bannerscanner on Mar 18, 2003 at 10:25 UTC | by photon |
This is a little banner scanner which can send specific strings to different ports and dump the output.
Very ugly code. You can use it, e.g., to check the versions of different network services. |
Ascii Chart on Feb 24, 2003 at 23:04 UTC | by runrig |
Display an ascii chart similar to the ascii man page, in a variety of formats. |
nessus port reporter on Jan 25, 2003 at 01:27 UTC | by semio |
I've had the requirement lately to work with a large amount of security data in the NessusWX export file (.enx) format. This script will associate all ports with their respective IP in an ordered, greppable format. Hopefully someone will find it to be useful. Comments are very welcome. |
podwatch: POD previewing tool on Jan 17, 2003 at 23:21 UTC | by adrianh |
When I started writing POD documents I got bored with previewing things with perldoc. Every time I made an edit I had to start up perldoc again, find where I had made the change, only to discover it wasn't quite right. Repeat until done.
To make this a little easier I threw together podwatch. A (very basic) POD viewer that tracks changes to the source file. When the source changes podwatch reads it in again, and moves to the place where the output changed.
Now I edit POD in one window, with podwatch running in another. Instant feedback whenever I hit save.
Hope you find it useful. |
Splitting up XChat log files on Jan 12, 2003 at 06:54 UTC | by Paladin |
Script to split up XChat log files based on the time and date of the conversation. |
Url2Link 0.1 GUI/TK on Jan 12, 2003 at 05:12 UTC | by m_dv |
Simple Perl/Tk program that reads input lines from a textbox that have a URL at the beginning and converts the URLs into actual link files (named with the URL's domain). I did it because I had a file with a bunch of URLs that I wanted to put in my "Favorite" folder. Any comments are welcome. |
change/limit already existing file names to [-_.0-9a-zA-Z]+ on Jan 09, 2003 at 09:19 UTC | by parv |
sanefilename.perl changes characters in file names which are not
composed of '[-_.a-zA-Z0-9]' characters, as follows...
- all the characters not matching '[-_.a-zA-Z0-9]' are converted to '-'.
- a '-_' or '_-' sequence is changed to a single '-'.
- any sequence of '.-', '._', '-.', '_.' is changed to a single '.'.
- multiple occurrences of [-_] are changed to one.
In case of surprise(s), refer to the source code. It is also available
from...
http://www103.pair.com/parv/comp/src/perl/sanename.perl
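A rough Perl sketch of those substitutions (hedged: the real sanename.perl may apply them in a different order, or loop until the name stops changing):
for my $old (@ARGV) {
    (my $new = $old) =~ s/[^-_.a-zA-Z0-9]/-/g;   # anything outside the allowed set becomes '-'
    $new =~ s/-_|_-/-/g;                         # '-_' or '_-' collapses to a single '-'
    $new =~ s{\.[-_]|[-_]\.}{.}g;                # '.-', '._', '-.', '_.' collapse to '.'
    $new =~ s/[-_]{2,}/-/g;                      # runs of '-' or '_' collapse to one character
    next if $new eq $old;
    rename $old, $new or warn "rename $old -> $new: $!\n";
}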
|
template on Jan 08, 2003 at 22:09 UTC | by JaWi |
This little script allows you to create skeleton files based on simple templates. It provides a simple form of variable expansion which can be used to automatically fill in certain parts of your (future) code.
For example, you can create a template to automatically generate C++ or Java classes. |
Ensure Zip files always unpack to a single subdirectory on Jan 07, 2003 at 08:28 UTC | by bbfu |
hardburn requested a utility that would ensure that Zip files did not "explode", or unpack files into the current directory, by extracting into a subdirectory if the Zip file was not already set up to do so. |
Interlaced duplicate file finder on Jan 06, 2003 at 21:18 UTC | by abell |
A script to find and remove duplicate files in one or more directories. It serves the same purpose as salvadors' module (see also File::Find::Duplicates), but it is more efficient at discriminating between different files of the same size.*
The program gets a speed-up by reducing file reads to a minimum. In most cases, it only reads small chunks from unique files and only files with duplicates are read completely. Thus, it is particularly fit for big collections of audio files or images (shameless advertisement ;).*
* - added and revised explanation, inspired by merlyn's comment. |
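A much-simplified approximation of the approach described in 'Interlaced duplicate file finder' above (group by size, then read only candidates; the posted script goes further and compares chunks interleaved so unique files are barely read at all):
use File::Find;
use Digest::MD5;

my %by_size;
find(sub { push @{ $by_size{ -s _ } }, $File::Find::name if -f $_ }, @ARGV ? @ARGV : '.');

for my $group (grep { @$_ > 1 } values %by_size) {      # a unique size means a unique file
    my %by_head;
    for my $file (@$group) {
        open my $fh, '<:raw', $file or next;
        read $fh, my $head, 4096;                        # cheap first pass over a small chunk
        push @{ $by_head{ Digest::MD5::md5_hex($head) } }, $file;
    }
    print "possible duplicates: @$_\n" for grep { @$_ > 1 } values %by_head;
    # a real tool would now confirm by comparing full contents
}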
breathe - beyond 'nice' on Jan 02, 2003 at 17:27 UTC | by tye |
Run code in the background even more nicely than is possible with 'nice'.
We have administrative tasks that run on the PerlMonks database server regularly. These make the site quite sluggish even if run via "nice -99". So I wrote this.
It runs a command nice'd but every few seconds it suspends the command for several seconds.
Updated.
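A bare-bones sketch of that run/suspend cycle (not tye's breathe; the interval lengths here are made up):
use POSIX ':sys_wait_h';

my $pid = fork;
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    setpriority(0, 0, 19);                 # 0 = PRIO_PROCESS; lower our own priority, like nice
    exec @ARGV or die "exec @ARGV: $!";
}
until (waitpid($pid, WNOHANG) == $pid) {   # loop until the child has exited and been reaped
    sleep 5;                               # let it run for a while ...
    kill 'STOP', $pid;                     # ... then pause it so the rest of the box can breathe
    sleep 3;
    kill 'CONT', $pid;
}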
|
Create Daily Backups of Scripts and Datafiles Inside a Web Site on Dec 29, 2002 at 08:14 UTC | by jpfarmer |
I work on a website with several programmers, all with vastly different styles of writing and testing code. Every so often, a script or data file gets clobbered, and the most recent total-site backup may be several weeks old at best (mainly because the whole-site backup is so large).
In response to that problem, I wrote this script to back up the files that would normally get changed during development. I run it nightly via a cron job.
This is my first code submission, so please give me any feedback you may have. |
(code) Cross-platform unlink all but $n newest $filespec in $dir on Nov 26, 2002 at 20:52 UTC | by ybiC |
Delete all but "n" newest files of given filespec from specified directory. Accepts filesystem wildcards like * and ? as filespec arguments. The code line that actually unlinks files is commented out - uncomment once you're comfortable with how options and arguments operate. Tested with Perl 5.6.1 on Debian 3, Win2kPro, WinNT plus Perl 5.8.0 on Cygwin.
It's entirely possible that this might be done in fewer LOC using File::Find. Nonetheless, has been a good exercise/refresher for /me on stat, sort, cmp, regexen, and glob.
Thanks to the following monks for direction, clues, and answers to brain-mushing questions: Petruchio, jkahn, Undermine, Zaxo, theorbtwo, fever, BrowserUk, tye, belg4mit, PodMaster, and Mr. Muskrat. And to some guy named vroom.
Update: see pod UPDATES
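The core of the keep-the-newest idea, as a hedged sketch with made-up example values (the real script adds option handling, logging and OS portability):
my ($dir, $spec, $keep) = ('/var/backups', '*.tar.gz', 5);   # example values only

my @files = map  { $_->[0] }
            sort { $b->[1] <=> $a->[1] }                     # newest first ...
            map  { [ $_, (stat $_)[9] ] }                    # ... by mtime
            glob "$dir/$spec";

for my $victim (@files[$keep .. $#files]) {                  # empty slice if there are few files
    print "would unlink $victim\n";
    # unlink $victim or warn "unlink $victim: $!\n";         # uncomment when you trust it
}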
|
Remove eMpTy Directories on Nov 18, 2002 at 16:45 UTC | by tye |
Traverses a directory tree deleting any directories that are empty or only contain subdirectories that get deleted. Updated. |
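For the entry above, the whole trick fits in a few lines of File::Find; shown here only as a sketch of the approach:
use File::Find;

# bydepth visits children before their parent, so a directory whose only contents
# were just-removed empty subdirectories is itself empty by the time we reach it.
find({ bydepth  => 1,
       no_chdir => 1,
       wanted   => sub { rmdir $_ if -d $_ } },   # rmdir quietly refuses non-empty directories
     @ARGV ? @ARGV : '.');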
oldfiles on Nov 14, 2002 at 19:18 UTC | by neilwatson |
Recover your disk space!
oldfiles searches a directory and sub directories for files of a certain age and size. A report is emailed to the owners of those files asking them to remove or archive them.
This is a first draft. Everyone is encouraged to make comments and suggestions.
Thank you. |
resub on Nov 08, 2002 at 09:02 UTC | by graff |
Do any number of global regex substitutions uniformly over
any number of text files, and correctly handle all character
encodings supported by Perl 5.8.0, with optional conversion
of data from one encoding to another. (update: fixed checks
for valid regexes) |
force.pl on Oct 26, 2002 at 21:48 UTC | by ixo111 |
Perl translation of the old classic 'force.c', allowing
you to force input to a terminal device. Even though
it saw a lot of illegitimate use in the past, I've found
quite a few legitimate uses for it over the years. The ioctl/fcntl defs are hard-coded at the top and may need to be changed for other systems (consult your ioctl.ph, ioctls.ph and fcntl.ph for the proper values; consult h2ph if you do not have these files in your perl tree)
|
robustly list any Perl code's module dependencies on Oct 06, 2002 at 11:39 UTC | by Aristotle |
I just read hans_moleman's script and thought there has to be a more robust way to do this that doesn't rely on parsing sources and has better support for recursive dependencies than just reporting whether a toplevel dependency is satisfied. This module is the surprisingly simple result. It relies on the fact that you can put coderefs in @INC.
Update: podmaster points out that not all pragmas are available everywhere, warnings being an obvious example, so they constitute a dependency, too. I don't want to change the behaviour of this module, however; therefore I added this to the CAVEATS section in the POD. |
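The @INC trick mentioned above looks roughly like this (a sketch of the mechanism, not Aristotle's module; CGI is only an example, and modules already loaded never reach the hook):
my @loaded;
unshift @INC, sub {
    my (undef, $file) = @_;    # e.g. 'CGI.pm' or 'File/Spec.pm'
    push @loaded, $file;
    return;                    # decline, so Perl falls through to the normal @INC search
};

eval { require CGI };          # every require, including nested ones, passes the hook first
print "$_\n" for @loaded;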
Check a script's module dependencies on Oct 06, 2002 at 03:09 UTC | by hans_moleman |
I put together this code to help migrate Perl scripts from one environment to another. The scripts are mostly CGIs, and I found it annoying and time consuming to either
- run the scripts to see what breaks, or
- read through each script manually.
Using this little script you can check whether all needed modules are available.
Comments and suggestions are appreciated as always...
Update: Changed the logic in my if statement to reflect podmaster's CB suggestion... |
Tk Quick Benchmark Tool on Sep 22, 2002 at 04:54 UTC | by hiseldl |
This script is a user interface for comparing short snippets of code and was inspired by BrowserUk, bronto, and Aristotle at this node. There are several other nodes that have discussions about performance and using the Benchmark module, this is just the last one I read that made me want to write this code.
There are 2 buttons, 4 text widgets in which to enter text, 1 text widget to show the output, and an adjuster.
- Clear Button - clears all the text boxes and resets the count to 1000.
- Run Button - runs the tests using cmpthese from the Benchmark module. The output text widget will be cleared before the test is run; you can turn this off by commenting out the following line in the OnRun method:
$tk{output_text}->delete(0.1, 'end');
- count - this is the first argument to cmpthese.
- test1 - this is the first snippet to be tested. An example:
mapgen => 'my @ones = mapgen 1, 1000;'
- test2 - this is the second snippet to be tested. An example:
xgen => 'my @ones = xgen 1, 1000;'
- code - this is where any supporting code should be typed; this field is not required. An example:
sub mapgen { return map $_[0], (1..$_[1]); }
sub xgen { return ($_[0]) x $_[1]; }
- output - this is where the output from cmpthese will appear as well as the code that was eval'd. An example:
COUNT=1000
TEST CODE:
{mapgen => 'my @ones = mapgen 1, 1000;',
xgen => 'my @ones = xgen 1, 1000;',}
SUPPORT CODE:
sub mapgen { return map $_[0], (1..$_[1]); }
sub xgen { return ($_[0]) x $_[1]; }
RESULTS:
Benchmark: timing 1000 iterations of mapgen, xgen...
mapgen: 2 wallclock secs ( 2.14 usr + 0.00 sys =
2.14 CPU) @ 466.64/s (n=1000)
xgen: 1 wallclock secs ( 1.16 usr + 0.01 sys =
1.17 CPU) @ 853.24/s (n=1000)
Rate mapgen xgen
mapgen 467/s -- -45%
xgen 853/s 83% --
Happy Benchmarking!
-- hiseldl "Act better than you feel"
|
Storable 2 Text - An editor for data files created by Storable.pm on Aug 08, 2002 at 13:57 UTC | by kingman |
Opens a file created by Storable::lock_store and dumps the data structure to an ascii file for viewing/editing.
Also creates a file via Storable::lock_store that contains an empty hash if you create a symlink to the script called ts (touch store). |
Mirror only the installable parts of CPAN on Aug 08, 2002 at 06:10 UTC | by merlyn |
As noted in a parallel thread, I have this short program which can mirror a complete set of the installable modules for use with CPAN.pm.
This is for review purposes only. A final version of this code will appear in my LM column. Comments are welcome.
WARNING: As stated, this was a preliminary version of this program for comment only. While writing the column, I fixed a few bugs. Do not use the version here. Use the version there instead. |
diffsquid - find the differences in Squid configuration files on Aug 07, 2002 at 15:06 UTC | by grinder |
Analyse two squid configuration files, and report parameters that are
present in one file but not in the other, or have different values.
Also attempt to identify valid parameter names in the comments and
report on those as well (useful when new versions are released). |
rcmd.pl on Jul 25, 2002 at 12:26 UTC | by greenFox |
rcmd.pl -a utility for running the same command across a group of hosts.
|
dgrep - Wrapper around gnu find & grep on Jul 22, 2002 at 20:10 UTC | by domm |
I just cannot remember how to run find and grep together. After reading the FM once too often, I wrote this small wrapper.
Pass it a pseudo-regex (to match the files) and another one to look for in all files.
Example:
% dgrep .pm foo
Will look for "foo" in all files ending in ".pm" in the current and lower directories.
Edited:
~Tue Jul 23 15:24:49 2002 (GMT),
by footpad: Added <code> tags to the code.
|
win2unix on Jul 20, 2002 at 08:03 UTC | by ackohno |
This is a little script I came up with to get those ^M's out of files that come in the downloaded source here at PerlMonks. Given one argument (a file name), the script removes the ^M's from that file; given two, the first is input and the second is output. If the if statement matching for the perl shebang is removed, this script can be used to remove the ^M's from any file. Without that if statement, there may be a newline at the beginning of the file which will cause the script not to run.
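The heart of win2unix is the classic carriage-return strip; a hedged sketch of the two-argument form (the default output name here is invented, and the real script rewrites a single file in place):
my ($in_name, $out_name) = @ARGV;
$out_name = "$in_name.unix" unless defined $out_name;   # made-up default name

open my $in,  '<', $in_name  or die "open $in_name: $!";
open my $out, '>', $out_name or die "open $out_name: $!";
while (<$in>) {
    s/\r//g;               # strip the ^M (carriage return) characters
    print {$out} $_;
}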
|
CheckPoint rule auditor on Jul 12, 2002 at 04:08 UTC | by semio |
This script was designed to help me gain insight into rule utilization on the Check Point Firewalls I maintain e.g. rules most heavily used or, conversely, rules not being used at all. Its input is any semi-colon delimited file created using logexport on the Firewall. Works on 4.1 and NG |
backfiles on Jun 27, 2002 at 03:23 UTC | by xiphias |
Backs up files using tar into dirs in backlist.txt.
Quite simple |
MODULATOR on Jun 21, 2002 at 01:39 UTC | by epoptai |
Browse pod and code of installed perl modules in a handy frameset. Lists each installed perl module linked to an HTML rendering of its pod if any, and to its source code. Option to automatically put synopsis code into a form for easy testing via eval (this is both powerful and dangerous, use caution). Lists environment variables and result of various path and url finding methods. Here's a screenshot.
Updates:
fixed problem with "refresh cache" not refreshing the cache.
added "no header" option to code eval, for testing output of modules like GD.
implemented this fix suggested by perigeeV.
added a link to the perl module list.
added function to list module source code with numbered lines.
Added a CPAN search form. |
nnml2mbox on Jun 11, 2002 at 15:45 UTC | by mikeirw |
I needed to view the contents of a nnml mail directory, but didn't have
access to Emacs or Gnus, so I whipped up this simple script to allow me to
use mail -f instead. I must say that I'm a Perl newbie, so it may need some
work. If so, I'll appreciate any comments.
A quick note: I did not include any code to match Gcc'ed emails (which
doesn't generate a From header), so you may need to add that before
running. |
Statistical data analysis on Jun 05, 2002 at 12:56 UTC | by moxliukas |
This short program outputs some statistical analysis given input data in two tab-separated columns, the first being the X column and the second the Y column. It calculates means, quartiles, median, variance and standard deviation for both sets of data. It also outputs various summations (X, X^2, Y, Y^2 and X*Y). It then calculates covariance, the linear correlation coefficient and the coefficient of determination, and finally comes up with the linear regression equation.
Most of this is simple and straightforward maths and I do hope it will prove useful to someone (well, I have used this script for my statistics lectures). |
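For reference, the regression and correlation pieces reduce to the usual sums; a self-contained sketch over the same two-column input (not the posted program):
my (@x, @y);
while (<>) {
    chomp;
    my ($x, $y) = split /\t/;
    push @x, $x;
    push @y, $y;
}

my $n = @x;
my ($sx, $sy, $sxx, $syy, $sxy) = (0) x 5;
for my $i (0 .. $#x) {
    $sx  += $x[$i];          $sy  += $y[$i];
    $sxx += $x[$i] ** 2;     $syy += $y[$i] ** 2;
    $sxy += $x[$i] * $y[$i];
}

my $slope     = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx ** 2);
my $intercept = ($sy - $slope * $sx) / $n;
my $r         = ($n * $sxy - $sx * $sy)
              / sqrt( ($n * $sxx - $sx ** 2) * ($n * $syy - $sy ** 2) );
printf "y = %.4f x + %.4f   (r = %.4f)\n", $slope, $intercept, $r;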
Fake daemon on May 30, 2002 at 02:31 UTC | by hagus |
A script I dug out of my archives. I submit it here in the hope that someone might find some useful sample techniques, despite its hurried appearance. I wrote it awhile ago with the following goals:
To make a non-daemon process run as if it were a daemon (ie. give it a controlling terminal).
To collect the stderr and stdout streams from that process uninterleaved (is that a word?).
To restart the process at a particular time each day.
To restart the process should it die unexpectedly.
Things needing fixing that I can see:
Signal handling is below par. I don't understand it very well, as I seldom have to handle signals in perl.
Restart time is hardcoded - it really should take either a maximum run-time argument, or a date string which is parsed.
Command line arguments, anyone?
Handling infinite loops when restarting the process. Ie. if restart occurs more than x times in y seconds, sleep for z or exit.
Other stylistic or design problems people might see?
|
MySQL backup on May 24, 2002 at 23:25 UTC | by penguinfuz |
This script backs up each MySQL database into individual gzip'd files; useful in shared environments where many users have their own MySQL databases and wish to have daily backups of their own data.
UPDATE: 17/07/2003
- Now using bzip2 for better compression
- Removed connect() subroutine
TODO
- Read db owners from a config file and automatically deliver
backups to the appropriate ~user dir.
|
Service Health Scanner on May 19, 2002 at 11:23 UTC | by penguinfuz |
Basically the script first tries to resolve the domain name; if it fails you get an email notification telling you which domain could not be resolved. If the domain resolves OK, the domain is then scanned for whatever service is specified; if this fails you receive a notification of which domain is having trouble.
The subject line of the email notifications has been tailored for output to a mobile telephone and/or pager.
UPDATE: Complete rewrite, still checking for DNS resolution before moving forward, but now using LWP::Simple to check web/SSL connectivity, and email notifications are more user-friendly in that the failed service and hostname is listed in the SOS.
|
NFS Watcher on May 16, 2002 at 13:13 UTC | by penguinfuz |
I wrote this script to address an issue I had with Apache authentication and multiple load-balanced web servers, and I run it from a cronjob every so often.
Basically I keep ONE copy of the relevant .htaccess files and NFS export their location to the other web servers in the cluster.
I am sure this script could be extended further, but I have not messed with it since it serves the basic purpose I started towards. I hope someone else will find this code useful as well. ;)
UPDATE:
Added "use strict;" and found a missing semicolon at: my $ip_add = "192.168.0.10"
And a global @mntargs; hanging around. |
gdynfont.pl - A Gimp plug_in on May 13, 2002 at 21:17 UTC | by simonflk |
This plugin will display information about a GDynText layer in the Gimp.
Background
I needed to find out the name of the font that I used in a .XCF a few months ago. I didn't have the font installed anymore, so GDynText selected the first font in the list. It may be that I have overlooked a simpler way of working out the font, nevertheless, this was a nice excursion into Gimp-Perl. Unless you are in a similar situation (no longer have a font, or someone gives you a Gimp file without associated fonts), this will be pretty useless because it doesn't display anything that you can't get from GDynText itself.
Installation
Copy this script into your ~/.gimp-1.2/plug-ins folder and make it executable.
Usage:
Select a GDynText layer and then select GDynFontInfo from the <Image>/Script-fu/Utils menu |
dfmon - Disk Free Monitor on May 09, 2002 at 12:04 UTC | by rob_au |
This little script is a rewrite of a very nasty bash script that for a long time formed a core crontab entry on systems which I administer. This code allows for the alerting of the system administrator via email when the disk space on any of the mounted partitions drops below a set threshold.
The corresponding template file used for the email sent to the system administrator may look similar to the following:
This is an automatically generated message.
The following file systems have reached a storage capacity greater
than the alert threshold set within the dfmon.perl administrative
script.
[% FOREACH fs = filesystems %]
[% FILTER format('%-10s') %][% fs.mountpoint %][% END -%]
[%- FILTER format('%15s') %][% fs.blocks_total %][% END -%]
[%- FILTER format('%10s') %][% fs.blocks_used %][% END -%]
[%- FILTER format('%10s') %][% fs.blocks_free %][% END -%]
[%- FILTER format('%5s%%') %][% fs.percent %][% END -%]
[%- END %]
--
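Feeding a template like that boils down to parsing df output into hashes with the same keys; a hedged sketch (the threshold value and template filename are made up):
use Template;

my $threshold = 90;                               # alert when a partition is over 90% used
my @filesystems;
for my $line (`df -kP`) {
    next if $line =~ /^Filesystem/;               # skip the header line
    my ($dev, $total, $used, $free, $pct, $mount) = split ' ', $line;
    next unless defined $pct and $pct =~ /(\d+)/; # some pseudo filesystems report '-'
    $pct = $1;
    push @filesystems, {
        mountpoint   => $mount,
        blocks_total => $total,
        blocks_used  => $used,
        blocks_free  => $free,
        percent      => $pct,
    } if $pct >= $threshold;
}

Template->new->process('dfmon.template', { filesystems => \@filesystems })
    if @filesystems;                              # the real script then mails the result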
|
VBA 2 Perl on May 08, 2002 at 18:37 UTC | by Mr. Muskrat |
vba2pl reads a VBA macro file and attempts to translate the contents to perl. It outputs to a file with the same path and base name but a .pl extension.
It's far from finished although it has come a long way since I started on it last night.
It got its start in my follow up to Win32 - M$ Outlook and Perl.
Mandatory "Dark Side" quotes...
"If you only knew the power of the Dark Side of the Force" - Darth Vader
"Once you start down the Dark Path, forever will it dominate your destiny, consume you it will..." - Yoda
|
SSManager on May 05, 2002 at 22:01 UTC | by Anonymous Monk |
This module wraps a number of SourceSafe OLE Server functions in one-step function calls. |
Duplicate file bounty hunter on Apr 24, 2002 at 00:42 UTC | by djw |
This will search out a directory recursively for duplicate files of a specified size (default: 100 MB). It logs everything and makes you dinner.
Enjoy,
djw
*update* 04.24.2002
After some suggestions by fellow monks, I have changed this to use Digest::MD5. Thanks everyone++.
djw |
Summarize Orange phone bill on Apr 03, 2002 at 09:21 UTC | by fundflow |
Parse the phone bill given by Orange cellphone company (www.orange.net)
It shows the information per phone number (number of calls and text messages, and the cost/minute). I use it to check which of my friends costs me the most :)
I'm now in the UK and this is the bill they give here, but this might work in other countries. In case there's a problem with other countries, let me know.
|
Oracle DB & Server Backup on Mar 29, 2002 at 02:42 UTC | by samgold |
Script to backup 4 Oracle databases and then backup the server using ufsdump. It backs up 2 databases a day and then the other 2 the next day. This was written for a Sun Box backing up to a DLT tape drive. Look for places where lines will need to be edited noted by #!! If you have questions or comments please let me know. |
Length on Mar 14, 2002 at 07:46 UTC | by Juerd |
This code is very simple, and I think every experienced Perl coder can think of it. However, I use this twice a day, so it's useful to me. It might help out others, or at least make some of you toss away those old while(1) { print length(<>) - 1, "\n" } scripts that some people have. |
tree.pl - kinda like tree on Mar 04, 2002 at 20:42 UTC | by crazyinsomniac |
Heard of the Perl Power Tools? Well, this one was missing (tree). This version builds a LoL, with the first element in each list being the directory name. Output of the real tree utility looks like
F:\DEV\FILE_TREE\B
| file
|
\---c
| file
|
\---d
file
Improvements are welcome. |
(code) Ignore The Man Behind The system(rsync) Curtain on Mar 01, 2002 at 03:33 UTC | by ybiC |
Wrapper for rsync, intended for backing up data betwixt a client (Cygwin on Win32|Linux) and a server (Linux).
I looked into the File::Rsync module, but it also employs 'exec' calls. So for now I will stick with a system call
to rsync, for simplicity and to stay with standard-distribution modules only.
From a Perlish standpoint, this has been a refresher in the use of 'tee' (props to tye and Zaxo), another chance to use the nifty Getopt::Long and swell Pod::Usage modules, use timestamps for logfile names, detect OS type with $^O, use the keen-o filetest operators, sprintf for human-readable date+time, and to write another silly Perl script that's 50% pod.
As always, comments and criticism are wildly welcomed.
Update:
Experimenting with File::Rsync to ease parsing-on-Cygwin woes.
Add parsing code by Zaxo
Minor tweaks to pod
Present runtime in appropriate units (sec, min, hour...)
Handle backups of rsync modules *to* rsync server in addition to *from*
|
Searching for 'chunks' of data in very large files on Feb 28, 2002 at 18:21 UTC | by Ovid |
Recently, in the Perl beginners list, someone had a bit of a quandary. They were reading a 600 MB file and needed to find a search term, grab from the file 200 bytes of data both before and after this term and then search for another term within that 'chunk' of data.
I thought this was such a fun problem that I went ahead and wrote the program for this person (yeah, I know, I gave him a fish). This is deliberately overcommented in case the person did not know a lot of Perl. The basic idea is to search the file and return 400 byte 'chunks' in an array. |
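One way to attack the problem in 'Searching for chunks' above without slurping 600 MB, shown purely as a sketch (variable names invented): make the search term itself the input record separator, so each record ends right at a hit.
my ($file, $term) = @ARGV;

local $/ = $term;                         # records now end at the search term
open my $fh, '<:raw', $file or die "open $file: $!";

my (@chunks, $lead);
while (defined(my $rec = <$fh>)) {
    my $hit = chomp $rec;                 # strips the term if this record ends with it
    if (defined $lead) {                  # the previous hit was waiting for its trailing context
        push @chunks, $lead . $term . substr($rec, 0, 200);
        $lead = undef;
    }
    $lead = length($rec) > 200 ? substr($rec, -200) : $rec if $hit;
}
push @chunks, $lead . $term if defined $lead;   # a hit at end-of-file has no trailing text

print scalar(@chunks), " chunk(s) found\n";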
Virus Tester on Feb 24, 2002 at 04:40 UTC | by ProgrammingAce |
This little perl script will see if your antivirus software is paying attention by writing a file called aseicar.com to your C: drive. This file contains a small string that is harmless. Your antivirus program should register the new file as the "Test-String Virus". I repeat, THIS FILE IS HARMLESS! |
flexible find on Jan 27, 2002 at 17:17 UTC | by axelrose |
Here is a little script I always wanted to have:
a "find" script which
- runs on Macs (and *nix, Windows too)
- creates objects while running
- has an understandable "-prune" option
- uses Perl regex for filtering
This should make it easy to extend the idea for your purpose.
The example below outputs found files sorted by modification time.
You can change it to list directories sorted by size or by mtime
by providing a directory callback function:
my $dirfunc = sub { push @dirs, File->new( name => $_[0] ) };
WalkTree::walktree( $mystartdir, undef, $dirfunc, undef );
I'm happy about your comments.
Best regards,
Axel.
|
mrtg.errorcap - reformat MRTG errors on Jan 25, 2002 at 16:36 UTC | by grinder |
When running MRTG from cron, if a device fails to reply within the allotted time, it spits out a voluminous error message. If many devices are down (because a nearby gateway has fallen off the net), the exact nature of the problem can be difficult to see, as there is too much output to wade through.
This script takes the output, reformats and summarises it and sends it to the relevant authorities.
|
Regex Checker on Jan 08, 2002 at 18:45 UTC | by mrbbking |
2002-01-09 Update: Note to Posterity - there are better ways than this.
After posting this little thing (which truly was helpful for some),
two fine folks were kind enough to point out a
better way of understanding your regex and
a fine online reference - neither of which I had managed to locate on my own.
Many thanks to japhy for creating them and to tilly and crazyinsomniac for pointing them out.
If you're here looking to learn more about regular expressions, you'll do well to follow those links.
A new guy here was confused by the difference between
"capturing" with parens and "grouping" with brackets. I gave him this function to help show the difference, but it's useful for general testing of regular expressions.
The thing to remember is that it's easier to write a regex
that matches what you're looking for than it is to write
one that also doesn't match what you're not looking for.
Note: I did not use the 'quote regex' qr//; because that makes print display the regex in a way that differs from what the user typed. My goal here is clarity.
Further Reading (in increasing order of difficulty):
|
PingSweep on Jan 04, 2002 at 03:08 UTC | by sifukurt |
I've been working a lot with Perl/Tk recently (mostly just for fun), and I needed a script that would verify that a list of servers were active. So I combined the two things and ended up with PingSweep. It reads the servers out of an XML file (default name of the XML file is "hostdata.xml"), and the specifications for the XML file are included in the help text. It defaults to pinging the servers 20 times every 90 seconds, and sends an email to a specified address if any of the servers fail to respond. All of those options can be modified from the command line via Getopt::Long.
Hopefully you'll find it useful. As always, feedback is welcome. |
(Fake) CVS Import Utility on Dec 18, 2001 at 05:26 UTC | by vladb |
Does exactly what a 'cvs import' command would do with the only exception that a version of working source files will not be generated inside an active CVS repository. For that matter, it also doesn't require you to execute the 'cvs checkout' command to retrieve source code from
the repository to start working on. This, in turn, means that you may start archiving versions of your current source files immediately after
running this script inside the directory containing those source files. |
Verify Your FreeBSD CD on Dec 10, 2001 at 14:36 UTC | by crazyinsomniac |
If you just downloaded 4.4-install.iso also known as FreeBSD 4.4 ISO, and you "burned" it to a cd, and you wish to make sure that all the packages contained are as they should be (Debian has been really lame about this, I had at least 5 packages fail, even though the "burn" was good; it turns out, just a shabby iso release)
perl verify_freebsd.pl -d D:/>burnt
WHAT?!?!: For some reason, in \XF86336\CHECKSUM.MD5
the line
MD5 (Servers) = edc0aef739c1907144838e6c18587e02
which apparently is the md5 sum for the "directory" \XF86336\Servers which you might be able to do in *nix, but not on winblow. |
Verify Your Debian Potatoe on Dec 09, 2001 at 16:04 UTC | by crazyinsomniac |
All your smurf are to "Verify Your Debian Potatoe" after baking smurf .iso
perl verify_potatoe.pl -d D:/>burnt
update: if you run across "d41d8cd98f00b204e9800998ecf8427e" you've got yourself an empty file. Beware, some cd-r's play mind games |
CSV Database Validation on Nov 19, 2001 at 09:20 UTC | by Ovid |
Recently, for a personal project, I was using DBD::CSV to create a simple database. I was dismayed how little validation there was. The following program will allow you to create a CSV database and validate unique fields, foreign key constraints, and simple data types. You can also use this to validate data against regular expressions. Naturally, these are one-time validations.
Also included is support for validating an existing CSV database or CSV file. See POD for complete details.
Please let me know if you find any bugs in this code. Also, a code review would be appreciated :)
|
logrotate on Nov 12, 2001 at 19:23 UTC | by enaco |
This is a script for rotating logs.
It reads a configuration file that tells it what files to rotate, how long the rotation history is and, if one exists, what program to send a HUP when rotating its log.
This script was made primarily for learning Perl, but it is still a useful piece of work.
The script is not quite done yet; I still have some bugs to fix. I am releasing it early and hope for some input.
spew - print out random characters on Nov 12, 2001 at 19:19 UTC | by grinder |
Print out random characters, suitable for making hard-to-guess
passwords, or manual IPSec authentication keys (which is why
I wrote this in the first place). |
Obscure on Oct 25, 2001 at 20:14 UTC | by JimE |
While contrary to the spirit of Perl, the harsh realities of the world sometimes make it desirable for your code to be less than totally open. Although it gets discussed from time to time in various forums, I've never actually found a tool to do this, so I wrote a 'perl code obscurer' for my current need that might be of some use to others. Does not go as far as the encrypt/decrypt model proposed by some, just file munging and var renaming to produce a distributable file that the interpreter can run. Discourages tampering but won't stop a determined reverse engineer. More details in the pod in the file... |
makeperl on Oct 17, 2001 at 21:47 UTC | by Rich36 |
A very simple script, but one that I use all the time. I find myself writing a lot of small Perl scripts and I find this to be a convenient method for getting started, saving a little time, and imposing a standard coding structure.
makeperl creates a new file, writes a standard format/template for a Perl script, changes the permissions to be executable, and opens the file in an editor.
I also use this a lot for when I'm trying out code examples from books, online, etc.
This has only been successfully tested on *nix, but should work elsewhere. |
snapdiff -- Compare CPAN autobundle files on Oct 16, 2001 at 23:03 UTC | by Fletch |
CPAN.pm can create a bundle file which contains all of the modules
which are currently installed on a box with its autobundle
command. This program will compare two such snapshot files
against each other, noting what modules are in the first but not in the second (and vice versa)
as well as if there are differing versions of the same module.
Handy if you're trying to duplicate the configuration of one
box on another, or want to see what's changed over time if
you keep historical bundles.
|
Dump.pl on Oct 03, 2001 at 06:42 UTC | by hsmyers |
Nothing fancy, just a straightforward binary dump of a file in a formatted display to either STDOUT or filename. Particularly handy to compare what you think is in the file with what is actually in the file! hsm |
Simple Directory Mirror on Aug 31, 2001 at 21:59 UTC | by sifukurt |
This is a quickie script I wrote to backup files in my work directory to a backup directory. I thought it was handy, so I thought I'd post it here. |
Linux message log webifier on Aug 31, 2001 at 08:33 UTC | by hans_moleman |
This is a script I threw together to provide a web version of the Linux message log, indexed by service name. I'm a new perl coder so comments and especially suggestions would be much appreciated...
NOTE: This code has been modified as a result of the helpful suggestions given to me... ;)
09/02/2001 - heavier modifications now, I have put the file write stuff into a sub where it belongs and added code to read the logfile both by service and by date. By the way, this script should work for any syslog type log, I have tried it on /var/log/secure and it works like a charm...
09/03/2001 - added variables for HTML colour settings, kind of a poor man's style sheet ;).
|
GetUsers on Aug 30, 2001 at 23:15 UTC | by Jerry |
Client app for my GiveUsers server. Currently set up to get the password and shadow files (gpg encrypted before crossing the network!!) and to change all shells to /bin/false (except the first 20 lines). Be sure to create GPG keys and import them to the other box (as the user that will be running the GetUsers and GiveUsers scripts) |
Regex Tester on Aug 21, 2001 at 20:28 UTC | by George_Sherston |
A cgi that lets you try out regular expressions and see the results in an ergonomic way. No rocket science, but a thing I, who am fairly new to regexes, have found very useful for learning how they work, and coming up with the right one to do what I want. Some people can work it all out in their heads - I find trial and error indispensable!
The code is below, in case anybody wants to put it on his or her own machine, or check to make sure it's doing what it says on the box. But if you just want to use it, then please feel free to click here
§ George Sherston
PS - Alright already, I know all my variables are global and I didn't use straitjacket ... it's For Home Use!
PPS - Having said that, I would, of course, welcome style comments and suggestions for how to make it slicker. Every day and in every way I am getting better and better. |
NotSoLong.pl on Aug 12, 2001 at 03:53 UTC | by ichimunki |
|
Code::Police on Aug 01, 2001 at 19:41 UTC | by Ovid |
This is the Code::Police module. Provide this module to programmers who fail to use strict and most of their coding errors will be instantly eliminated. |
cols2lines.pl on Jul 31, 2001 at 19:41 UTC | by tfrayner |
Time once again to reinvent the wheel, I suspect. I wrote this a couple of years back as an exercise in perl. Specifically, a friend of mine was wanting to manipulate large (>10GB) tables of data. Part of his analysis involved transposing a table such that the columns became lines and vice versa. As a result I wrote this rather convoluted script. It only loads one line of the table into memory at a time, rather than the whole table.
The other points of this exercise were to make the code as well-structured (whatever that means :-P) and as user-friendly as possible. I imagine it's possible to load the essential functionality into a single line, but that wasn't my aim here.
Of course, I imagine there's a perfectly good CPAN module out there which would do this much better than this :-)
Update: The original script opened and closed the input file with each column that it read. I've changed to the more efficient seek function suggested by RhetTbull. |
Perl Code Colorizer on Jul 31, 2001 at 11:29 UTC | by BrentDax |
Okay, this one is pretty scary. This script reads in a (simple) chunk of Perl code (using normal filter behavior) and spits out an HTML file with certain things colorized. (You can see the list in the setup of the %config hash at the top.) It's extremely regexp-heavy. It's also pretty easy to confuse.
Notable bugs:
1. In a line like m#regexp#; #comment, the comment won't be colorized. Sorry.
2. In a line like m{foo{1,2}bar}, the program will get confused and stop highlighting after the 2. I've really got to work on nesting...
For all that, however, there's a lot of cool things it /can/ do, like:
-recognizing and colorizing (most) heredocs
-colorizing statements like @{&{$foo{bar}}} nicely to show which curlies belong to which sigil
-actually working most of the time
Only the colors for sigils are well-thought-out--the rest were just temporary values I assigned on a whim.
Also note that this was a lot of monkeys and typewriters--I myself am not quite sure how it all works correctly. Well, have fun with this chunk of code! |
Perl port of locate on Jul 19, 2001 at 18:41 UTC | by sifukurt |
Personally, on a Linux system, I use locate very regularly. The problem is that I have a couple Win32 systems that I use, and, of course, there isn't a tool like locate. So I wrote this. I use it quite a bit, so I thought I'd share it here in the hopes that others would find it useful. I tried to be as compliant with the GNU locate command as possible, plus I added a few simple features that I needed but weren't part of the GNU locate specifications.
There are a few variables that you'll want to alter to suit the needs of your system. Comments welcome. |
Generic Password Generator on Jun 28, 2001 at 15:35 UTC | by claree0 |
A simple (i.e. easily adapted by those with more specific requirements) password generator, which will generate passwords according to a template. Useful for environments where the use of a specific format of password is encouraged.
|
nugid - new user/group ids on Jun 25, 2001 at 13:28 UTC | by grinder |
When you have a large host with hundreds of users, managing file ownerships becomes hard. People come and go, but their files remain, and these files have to be assigned to other users. By itself the chown(1) program is too severe, as it will operate on anything it can get its hands on. This can be very disruptive in a directory tree where files may be owned by dozens of individuals.
So I wrote a script that is a little more selective, in its own words, it will
Selectively modify user and group ownerships of files and
directories depending on current ownerships. All files that
match the given group id or user id will be changed to the
new specified id. Numeric and symbolic ids are recognised.
Comments, suggestions and criticisms welcomed. I think the script name sucks, but I can't think of a better one. |
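The selective part of nugid is just a stat-then-chown filter; a minimal sketch with numeric ids only (the real script also accepts symbolic names and changes groups):
use File::Find;

my ($old_uid, $new_uid, @dirs) = @ARGV;
find(sub {
    my ($uid, $gid) = (stat $_)[4, 5];
    return unless defined $uid and $uid == $old_uid;    # leave everyone else's files alone
    chown $new_uid, $gid, $_
        or warn "chown $File::Find::name: $!\n";
}, @dirs);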
Directory Tree Comparison Module (File::DiffTree) on Jun 23, 2001 at 00:23 UTC | by bikeNomad |
This is a package that I wrote after seeing some other scripts here that did similar things. This package allows the behavior on same/different files as well as comparison to be pluggable using CODE references. It may become a CPAN module if the response here is positive enough. An example program that uses it is at the end. |
Password Manager on Jun 12, 2001 at 18:12 UTC | by DaveRoberts |
A simple Tk solution to allow simple management and change of NT and Unix passwords. Designed to make management of many accounts (and passwords) a little easier.
This does not remember your passwords (yet) but provides a single interface that allows passwords to be managed. |
macls on Jun 06, 2001 at 22:00 UTC | by snafu |
Rev ver 2.5:
This script is intended to get the mtime, atime, ctime information from a file or a list of files in a given directory. This is helpful to do basic file security auditing on your system. Read the comments in the script to get usage.
Updates: Rev ver 2.5
Added CLI parsing and POD. Now in github here
Updates: Rev ver 2.1
Added locale tz tweak.
Added symbolic permissions modes to output. Many thanks to pwalker@pwccanada.com @ http://php.ca/manual/ru/function.fileperms.php. He doesn't know he helped but his code was invaluable to the symbolic mode code I added to this script.
Finally made a few changes that were suggested from Jeffa. Made other minor (beautification) changes.
General clean up.
Syntax changes that I felt made the script easier to read and more maintainable. Added GPL comments at top with (c) 2001. Subversion tags added for info only.
If there are tweaks performed on the script I would sincerely appreciate an email with a copy of the script with your changes.
Tested successfully on: Linux, Solaris, FreeBSD
Comments would be appreciated.
|
(code) Yet Another Gzip Tarball Script on Jun 04, 2001 at 03:38 UTC | by ybiC |
I wrote this ditty to automate file copies, while retaining last-modified timestamps.
- Backup system configs, web directories, and perl scripts on 4 computers.
- Make it easy to keep perl scripts synchronized across the same 4 PCs.
Create gzipped tarball of all files in specified directories. Status and error messages written to console and logfile. Selectable compression level, recursion(y/n), log and dest files via commandline switches. Tested with Perl5.00503/Debian2.2r3, ActivePerl5.6/Win2k, Perl5.6.1/Cygwin/Win2k.
Sample run logfile at tail of pod. Critique, corrections and comments wildly welcomed.
Thanks to Vynce, mlong, bikeNomad, zdog, Beatnik, clintp, Petruchio and DrZaius for suggestions, tips and pointers. Oh yeah, and some guy named vroom, too.
Latest updates 2001-06-05 14:25 CDT
Correction:
Our very own bikeNomad wrote Archive::Zip, not Archive::Tar.
|
Perl Tags generator on May 25, 2001 at 23:19 UTC | by bikeNomad |
Perl editor tags generator (like ctags) that uses the debugger hooks to avoid parsing Perl itself.
update: avoid getting into long loops when the line number happens to be 65535 because of (apparently) a Perl bug |
Work Backup on May 16, 2001 at 01:26 UTC | by John M. Dlugosz |
This is a Perl program to perform daily backups of interesting "work" files in an intelligent manner. Developed under Win32, should be OK on all platforms. |
JournalH.pl on May 15, 2001 at 21:54 UTC | by JSchmitz |
Journal maker for the lazy - more time to drink Whoop-ass and listen to Slayer. |
Unlinker.pl on May 15, 2001 at 00:03 UTC | by JSchmitz |
Uses Perl's unlink to auto-delete numerically named files. Needed this here at work.
|
ping for ppt on Apr 23, 2001 at 00:29 UTC | by idnopheq |
ping -- send ICMP ECHO_REQUEST packets to network hosts
ping tests whether a remote host can be reached from your computer.
This simple function is extremely useful as the first step in testing
network connections, ping sends a packet to the destination host
with a timestamp. The destination host sends the packet back.
ping calculates the time difference and displays the data.
This test is independent of any application in which the original
problem may have been detected. ping allows you to determine whether further
testing should be directed toward the network connection or the
application. If ping shows that packets can travel to the remote
system and back, the issue may be application related. If packets can't
make the round trip, the network may be at fault. Test further.
Added actual ~pinging~ sound via the system bell, per the canonical naval usage of the term and the jargon file's entry (see the -a -A options).
The funniest use of `ping' to date was described in January 1991 by Steve Hayman on the Usenet group comp.sys.next. He was trying to isolate a faulty cable segment on a TCP/IP Ethernet hooked up to a NeXT machine, and got tired of having to run back to his console after each cabling tweak to see if the ping packets were getting through. So he used the sound-recording feature on the NeXT, then wrote a script that repeatedly invoked `ping(8)', listened for an echo, and played back the recording on each returned packet. Result? A program that caused the machine to repeat, over and over, "Ping ... ping ... ping ..." as long as the network was up. He turned the volume to maximum, ferreted through the building with one ear cocked, and found a faulty tee connector in no time.
Requires up-to-date Net::Ping and Time::HiRes. Many thanks to Abigail's neat warn and die subs from the http://language.perl.com/ppt site!
Two items anyone can help with:
- Sig{INT} control for Win32 - Term::ReadKey (which I could not get to work right for this one script)?
- TTL on various systems, like Win32. I know the default from the RFC, but the reality and how to automagically query?
UPDATE: ACK! I posted an old version w/o the ping sound! Here it is.
UPDATE 1: submitted to PPT for inclusion |
md5sum for PPT on Apr 13, 2001 at 01:35 UTC | by idnopheq |
md5sum computes a 128-bit checksum (or fingerprint or message-digest) for each specified file. If a file is specified as `-' or if no files are given md5sum computes the checksum for the standard input. md5sum can also determine whether a file and checksum are consistent.
For each file, `md5sum' outputs the MD5 checksum, a flag indicating a binary or text input file, and the filename. If file is omitted or specified as `-', standard input is read.
I added functionality from BSD md5 and some other md5sum ports (MS-DOS, etc.) for compatibility. FSF md5sum is the default. |
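The per-file core is a few lines of Digest::MD5 (a sketch of the basic behaviour only, not the full PPT port with its flags):
use Digest::MD5;

for my $name (@ARGV ? @ARGV : '-') {
    my $fh;
    if ($name eq '-') {
        $fh = \*STDIN;
    } else {
        open $fh, '<', $name or do { warn "$name: $!\n"; next };
    }
    binmode $fh;
    print Digest::MD5->new->addfile($fh)->hexdigest, "  $name\n";
}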
TAR.pl on Apr 11, 2001 at 20:06 UTC | by $code or die |
I wanted to find a simple command line TAR\GZ program for Windows to GZip my modules for uploading to CPAN. But I couldn't find one I liked.
Of course there is always cygwin, which I urge everyone who uses "explorer.exe" as a shell to download. It's only a small download.
However, this fitted my needs. Please let me know any comments or improvements.
Update: You might also want to check out btrott's script which does the same thing but also stores the path. I didn't see this before I posted mine. |
Gantt Diagrams on Apr 02, 2001 at 10:56 UTC | by larsen |
Simple module to produce Gantt diagram from
XML project descriptions.
Or: what can Perl do to help you if you're planning to conquer the world? |
detect sneaky processes which modify their process name. on Mar 26, 2001 at 06:11 UTC | by rpc |
This script walks through each PID in /proc and performs several checks to determine whether or not a process has modified its process name. It's trivial for a program to mung its process name and fool utilities such as 'ps'. There are many malicious tools available which try to hide their presence, using more common process names like 'pine'. However, if the binary itself was not invoked with this name, it's possible to detect this using the /proc interface. |
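The comparison itself is small; a Linux-flavoured sketch (naive, and it will flag legitimate cases such as login shells that rewrite argv[0]):
opendir my $proc, '/proc' or die "/proc: $!";
for my $pid (sort { $a <=> $b } grep /^\d+$/, readdir $proc) {
    my $exe = readlink "/proc/$pid/exe" or next;        # needs privilege for other users' processes
    open my $fh, '<', "/proc/$pid/cmdline" or next;
    my $cmdline = do { local $/; <$fh> };
    my ($argv0) = split /\0/, (defined $cmdline ? $cmdline : '');
    next unless defined $argv0 and length $argv0;

    (my $exe_base  = $exe)   =~ s{.*/}{};
    (my $name_base = $argv0) =~ s{.*/}{};
    $exe_base =~ s/ \(deleted\)$//;                     # binary was unlinked after it started

    print "$pid: claims to be '$name_base' but runs '$exe_base'\n"
        if $name_base ne $exe_base;
}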
pinger - ping a range of hosts on Mar 23, 2001 at 19:19 UTC | by grinder |
A little script that provides an easy way of pinging all
the hosts from, e.g. 192.168.0.1 to 192.168.0.100. The
output can be tailored in various ways. |
vargrep on Mar 13, 2001 at 01:21 UTC | by japhy |
A raw attempt at scanning a Perl program for specific variables. Results are usually good. |
Week Partitioner on Mar 09, 2001 at 02:02 UTC | by japhy |
This program takes a month and a year, and returns output like cal, for the days of the month between Monday and Friday. |
chmug on Mar 07, 2001 at 04:01 UTC | by tye |
A perl-only replacement for chmod, chown, and chgrp that I found very convenient when I was a Unix sys admin. It lets you change the mode, owning user, and owning group all at once (or any combinations thereof).
This is some pretty old code (last updated in 1995) but it doesn't look horrendous so I thought I'd add it to the archive. |
Tcl/Tk to Perl Tk on Feb 27, 2001 at 06:06 UTC | by strredwolf |
A rather crude converter, but if you design it with wish
and then change it over, it cuts things down a bit.
|
Find and convert all pod to html on Jan 30, 2001 at 01:33 UTC | by ryddler |
Searches the site and lib directories on an ActiveState install for any POD that isn't in HTML format and converts it. Rebuilds HTML TOC after conversion. A logfile is kept to track additions. |
shrink.pl - Scales down images on Jan 21, 2001 at 23:28 UTC | by Vortacist |
This is my first major perlscript (with help from Falkkin)--it scales down the size of images in a specified directory and all of its subdirectories. This is not a compression algorithm--it simply resizes the images based on command-line options. The user may specify size (as "xx%" or a number of pixels), starting directory, and which types of image files to resize. The user is required to specify a size; if none is given, the online help message is printed. Please see this message for more info.
I tend to do a lot of image-resizing for CD-ROM scrapbooks and thumbnails and thought other people might find this script useful for similar tasks.
I would appreciate any suggestions on how to make this script more efficient, and I'd also like to know if the help text is clear enough. Thanks!
Update: Changed code as per merlyn's suggestion below, with one slight difference (see my reply). |
csv2png.pl - Line Graphs from a CSV file on Feb 16, 2001 at 23:14 UTC | by clemburg |
This is a small script that takes CSV data as input
and generates a PNG file displaying a line graph as output.
Basically, each column in the CSV file is a data series that
will be displayed as a line graph. The first series labels the X axis. |
(self-deprecated) slack updater on Jan 12, 2001 at 05:22 UTC | by mwp |
Deprecated: If you're looking for a script to do
this for you, I highly recommend autoslack, written by
David Cantrell (of the Slackware Team). It can be found in
the unsupported directory on any slackware mirror.
A relatively simple script that I'm writing which scans a
local, uninstalled copy of Slackware 7.1 and updates the
packages from a slackware-current mirror. Very rough around
the edges, so be gentle. Gives you the option of installing
downloaded packages but is not integrated with /var/adm/packages
info in this version. Useful to a point, mostly written for
myself only because Patrick Volkerding & Co. are writing a
script named 'autoslack' (in Perl!) with the same exact
functionality, 'cept probably better! I was just impatient.
Good example of Digest::MD5, following our recent
discussions!
|
Sun Fingerprint on Jan 05, 2001 at 08:37 UTC | by a |
Uses Sun's on-line md5 database to validate your Solaris
system files (exe and libs). As in, checking for
trojans or just getting
the proper version numbers. You could run it in a pipe
w/ find or ls, e.g.
ls /bin | sunfingerprint -
It has various levels of output (even the whole, original html)
so you could do various things to see what it
complains about or notices.
|
mkpkg on Jan 03, 2001 at 19:25 UTC | by kschwab |
A wrapper script for pkgmk on Solaris to create Solaris
software packages. (pkgmk and pkgtrans are overly complicated...this simplifies things for me) |
switcher on Dec 31, 2000 at 01:18 UTC | by $CBAS |
woohoo! my first code on PM! :-)
This little thing sweeps through directories to change the extensions of files (I use it to rename .MP3 files to .mp3 ... damn AltoMP3!)
|
tidyhome on Dec 17, 2000 at 01:02 UTC | by billysara |
A little script to tidy users home areas by moving files
from ~/ to subdirectories related to filetype. Written so
I didn't need to spend my time tidying up all those .tar.gz
files I accumulate from freshmeat...
|
col for PPT on Nov 30, 2000 at 04:06 UTC | by chipmunk |
This script is a Perl implementation of the Unix col utility, which filters reverse line feeds from input. (More details: col manpage) It was written for the Perl Power Tools project. Sadly, the PPT webpages have not been updated since I submitted this script. So, I'm posting it here, because I don't want the script to go to waste.
When I wrote this script, I had access to two different implementations of col, one on BSD and the other on IRIX. I added new command-line options so that, in most cases where the BSD and IRIX implementations behaved differently, my own implementation could be instructed to emulate either.
Although there are no known bugs in the script, there may be bugs that I am not aware of. Please let me know if you find any. Comments and suggestions are welcome as well.
P.S. Also, if anyone has a use for this script, please let me know. I'd never even heard of col until I saw it listed in the PPT!
|
onlyone on Oct 31, 2000 at 01:58 UTC | by BoredByPolitics |
This is my second perl program, written to fulfill a need at work. I was wondering if someone with more experience could critique it, there being no other perl programmers in my office :-)
Its aim is to be run as part of the .bash_profile, to remove any prior D3 program running from the user's IP, to solve the problem of rebooted clients at a customer of ours (stopping them rebooting wasn't an option).
Oh, almost forgot, it's also supposed to run suid root, as the users often use different login names from the same workstation. |
Find Duplicate Files on Jan 04, 2001 at 23:08 UTC | by salvadors |
As my original Find Duplicate Files script was so popular I decided to take the advice of turning it into a module. Here's the initial version of it. I'd appreciate feedback on ways to provide a nicer, more useful, interface than just returning a HoL.
Thanks,
Tony |
PMail on Oct 02, 2000 at 17:47 UTC | by le |
This script takes a Unix Mailbox (in so-called Berkeley format),
and generates several HTML files from it. (Just as many
Mail-to-HTML programs do.)
The output looks almost like the one from hypermail.
Yeah, I know, you might say "Come on, this is mail2web no. 2365, who needs it?"...
well I just did it for learning purposes. Maybe you can learn, too. |
syscheck: check system files for changes on Sep 11, 2000 at 02:05 UTC | by cianoz |
This program checks files and directories on your system, reporting
whether they were modified (by checking an MD5 sum), created, or
deleted since the last time you initialized it.
I use it to check my /etc /sbin /lib /bin /usr/bin and /usr/sbin
for changes I didn't make (backdoors?).
You can run it from a cron job or manually
(it is safe to store a copy of the checksum database outside the system).
It takes about 2 minutes to scan all relevant files on my systems.
Although it seems to work for me, it is just a quick hack;
I would appreciate some hints to make it better.
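For readers who want the flavour of it, a stripped-down sketch of the checksum-database idea (my illustration; the database path and output format are invented):
use strict;
use File::Find;
use Digest::MD5;
use Storable qw(store retrieve);

my $db_file = '/var/lib/syscheck.db';   # invented location
my @watch   = qw(/etc /sbin /lib /bin /usr/bin /usr/sbin);

my %new;
find({ no_chdir => 1, wanted => sub {
    return unless -f $_;
    open my $fh, '<', $_ or return;
    binmode $fh;
    $new{$_} = Digest::MD5->new->addfile($fh)->hexdigest;
} }, @watch);

if (-e $db_file) {
    my $old = retrieve($db_file);
    for my $file (sort keys %new) {
        print "created:  $file\n" if !exists $old->{$file};
        print "modified: $file\n" if exists $old->{$file} && $old->{$file} ne $new{$file};
    }
    print "deleted:  $_\n" for grep { !exists $new{$_} } sort keys %$old;
}
store(\%new, $db_file);   # (re)initialize the checksum database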
|
expirescore.pl on Sep 02, 2000 at 02:29 UTC | by le |
I use the slrn newsreader, and my scorefile got too
big (it was over 2 MB) with a lot of old scores,
because I didn't set expiration times for the scores.
So I hacked this script, which reads in the scorefile,
lets me expire scores (based on Time::ParseDate features
like older than "5 days", "3 months", or "2 years"),
and writes the scorefile back.
Currently it only runs interactively, but maybe I (or you)
will add the features needed to run without user
interaction (e.g. for cron scripting).
|
Pathfinder - find duplicate (shadowed) programs in your PATH on Aug 26, 2000 at 09:12 UTC | by merlyn |
Run this program (no arguments) to see which items in your PATH environment setting are shadowing later programs of the same name. This is an
indication that you might get failures when running other people's scripts, or perhaps if
you ever rearrange your PATH.
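In outline the technique is: remember the first directory that supplies each program name, and report any later directory that supplies the same name (a sketch, not merlyn's column code):
#!/usr/bin/perl -w
use strict;

my %first_seen;                        # program name => directory that wins
for my $dir (split /:/, $ENV{PATH}) {
    next unless -d $dir;
    opendir my $dh, $dir or next;
    for my $prog (readdir $dh) {
        next unless -f "$dir/$prog" && -x _;
        if (exists $first_seen{$prog}) {
            print "$first_seen{$prog}/$prog shadows $dir/$prog\n";
        } else {
            $first_seen{$prog} = $dir;
        }
    }
    closedir $dh;
}
|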
Find out where symlinks point on Aug 09, 2000 at 15:31 UTC | by merlyn |
Walks through one or more directories specified on command line, and fully expands any symbolic links within those directories to their real locations, taking into consideration all the relative and absolute symlinks that occur, recursively.
Originally written for a
Performance Computing/SysAdmin magazine Perl column. Go see there for a line by line description. |
wanka on Aug 08, 2000 at 23:57 UTC | by turnstep |
Just a simple hex editor-type program. I actually wrote this
back in 1996, so go easy on it! :) I cleaned it up a little to
make it strict-compliant, but other than that, it is pretty much
the same. I used it a lot when I was learning about how
GIF files are constructed. Good for looking at files byte by byte.
I have no idea why it was named wanka but it has stuck. :)
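The byte-by-byte viewing part boils down to a hex dump loop along these lines (a sketch, not the 1996 original):
use strict;

my $file = shift or die "usage: $0 file\n";
open my $fh, '<', $file or die "can't open $file: $!\n";
binmode $fh;

my $offset = 0;
while (read($fh, my $chunk, 16)) {
    (my $ascii = $chunk) =~ tr/\x20-\x7e/./c;      # non-printables become dots
    printf "%08x  %-48s %s\n", $offset,
           join(' ', unpack('(H2)*', $chunk)), $ascii;
    $offset += length $chunk;
}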
|
What's eating all your disk space? on Jul 12, 2000 at 03:25 UTC | by hawson |
I'm constantly having to clean out space on lots of computers, and looking at several screens of 'du' output hurts. So I wrote this little script to parse and format the output from 'du'. I know, I know, it's not strictly perl, but monks should be aware that there are things that exist outside these cloistered walls.
N.B. Since this is meant to be used in a pipe, it's usually all on a single line and without comments.
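Expanded into a readable (non-pipe-sized) form, the idea is roughly this (an illustration, not hawson's exact one-liner):
# Typical use:  du -k /some/dir | perl thisscript.pl | head -20
use strict;

my @entries;
while (<STDIN>) {
    my ($kb, $path) = split /\s+/, $_, 2;
    chomp $path;
    push @entries, [$kb, $path];
}
for (sort { $b->[0] <=> $a->[0] } @entries) {
    my ($kb, $path) = @$_;
    my $pretty = $kb >= 1048576 ? sprintf('%.1fG', $kb / 1048576)
               : $kb >= 1024    ? sprintf('%.1fM', $kb / 1024)
               :                  sprintf('%dK',   $kb);
    printf "%8s  %s\n", $pretty, $path;
}
|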
Simple Calculator on Jun 29, 2000 at 01:20 UTC | by ncw |
A simple perl calculator for use on the command line. See code for instructions. |
ButtonFactory on Jun 28, 2000 at 16:40 UTC | by t0mas |
A package that creates custom PNG buttons.
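The post doesn't show the package's interface, but the general shape of PNG button generation with GD might look like this (purely illustrative; the function name and layout are made up):
use strict;
use GD;

# make_button($text, $file): draw $text on a small rectangle, save it as a PNG
sub make_button {
    my ($text, $file) = @_;
    my $img  = GD::Image->new(100, 30);
    my $grey = $img->colorAllocate(200, 200, 200);   # background
    my $dark = $img->colorAllocate(60, 60, 60);      # border and text
    $img->filledRectangle(0, 0, 99, 29, $grey);
    $img->rectangle(0, 0, 99, 29, $dark);
    $img->string(gdMediumBoldFont, 10, 8, $text, $dark);
    open my $out, '>', $file or die "can't write $file: $!\n";
    binmode $out;
    print {$out} $img->png;
    close $out;
}

make_button('Home', 'home.png');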
|
UserId checker on Jun 10, 2000 at 04:58 UTC | by brick |
This program is meant to munge through N passwd files and
check for logins with multiple UIDs, UIDs with multiple
logins, and logins with a UID of zero (0) that are not root. |
pcal on Oct 22, 2000 at 05:45 UTC | by japhy |
pcal displays calendar information for the next X
months (it defaults to just the current month). With just a
little modification, it could be made to act exactly like the
cal utility found on most Unix machines. |
report ipchains DENY entries in /var/log/messages* files on Jul 28, 2000 at 07:07 UTC | by Anonymous Monk |
Creates and mails a report on ipchains DENY entries in the
/var/log/messages* files. The script remembers the last
entry from the last run so you only get new entries on the
next run. Add it to your crontab to generate periodic
reports. |
ppm.xml made more human-friendly on Aug 03, 2000 at 11:52 UTC | by Intrepid |
Please use caution!
I have found out from external sources that in XML, whitespace outside
of tags can be significant, and that although the present version of
PPM does not mind how my script changes ppm.xml, a future
version may. It might be better, therefore, not to use this script.
The Perl Package Manager (with ActivePerl) writes to a file named
ppm.xml each time a module is installed using PPM. It doesn't add
nice whitespace to the file, however, resulting in a mess if one
ever needs to or wants to (out of curiosity) go in there and see
what has happened before. This little (slightly "anal" :)
utility script can be called from the very end of PPM.bat
(another file that comes standard with the installation of
ActivePerl) by adding the line:
CALL fixPPMXMLfile
to the file below the lines that match this:
__END__
:endofperl
at the end of the batchfile. This kicks off your cleaner which
adds some whitespace to make more human-friendly a file which,
it is to be admitted, is ordinarily only machine-read. |
Killer on Aug 04, 2000 at 18:56 UTC | by ergowolf |
Replace the word "process" with what you are looking for. I
used this script to kill a particularly tough process
at work. I am sure this is the hard way, but it's how I did
it and it works. I would like to see the easy way though. |
NIST Atomic Clock Time on May 11, 2000 at 02:50 UTC | by reptile |
Uses LWP::UserAgent to get the current date and time (Eastern) from the NIST Atomic Clock website at www.time.gov
This code is public domain.
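The LWP part is essentially a GET plus a regex over the returned page; something like the sketch below (the regex is a placeholder, since the page layout has changed over the years):
use strict;
use LWP::UserAgent;
use HTTP::Request;

my $ua  = LWP::UserAgent->new(timeout => 30);
my $res = $ua->request(HTTP::Request->new(GET => 'http://www.time.gov/'));
die "request failed: ", $res->status_line, "\n" unless $res->is_success;

if ($res->content =~ /(\d{1,2}:\d{2}:\d{2})/) {
    print "NIST says it is $1 (Eastern)\n";
} else {
    print "couldn't find a time on the page\n";
}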
|
Cropping a postscript file on May 08, 2000 at 21:40 UTC | by ZZamboni |
This script allows you to select a portion of a postscript
file and crop it. The whole file is included in the result, but
the appropriate commands are included to crop it to the section
you select. Instructions for use are included in the script. |
Workaround Cern for Amanda status on Apr 15, 2000 at 23:34 UTC | by providencia |
This was created because the webserver that runs the internal
operations page is ancient and cannot run CGI as its owner,
so I couldn't create this as a CGI script (that's what our
previous webmaster told me). Instead, this program runs as root
and still gives the same information.
It's a specific solution to a specific problem, but you never know
who may find it useful.
|
c2printf on Mar 09, 2000 at 16:38 UTC | by stefan k |
This script takes a C source code file as input and converts
it to C source code that will printf exactly that file.
It uses an array of translation rules, so it should be expandable
(and maybe they're not complete yet?).
If you use this together with a little elisp snippet that
puts the output of a program into the current buffer, it is
quite useful.
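A stripped-down sketch of the escaping involved (not stefan k's rule table, and certainly less complete):
use strict;

# Read a C file on STDIN, emit C code that printf()s that file verbatim.
print "#include <stdio.h>\n\nint main(void)\n{\n";
while (my $line = <STDIN>) {
    chomp $line;
    $line =~ s/\\/\\\\/g;     # backslashes first
    $line =~ s/"/\\"/g;       # then double quotes
    $line =~ s/%/%%/g;        # printf's own escape character
    print qq{    printf("$line\\n");\n};
}
print "    return 0;\n}\n";
Run it as perl c2printf.pl < foo.c and compile the output to get a program that prints foo.c.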
|
DHCP and DNS Compare on Mar 30, 2000 at 20:14 UTC | by Hyler |
For Windows NT administrators. Compares DHCP server and DNS server info to see if they match reasonably. Requires NT Resource Kit installed. |
Tarball Cleaner on Feb 02, 2000 at 10:28 UTC | by Elihu |
Call this with a tarball to remove the tarball's contents from disk. Very useful if the tarball dumped its files into the current directory.
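Conceptually it is just a loop over the tar listing; a sketch (assuming GNU tar and a gzipped tarball, not Elihu's actual script):
use strict;

my $tarball = shift or die "usage: $0 tarball\n";
open my $list, '-|', 'tar', 'tzf', $tarball or die "can't run tar: $!\n";
while (my $entry = <$list>) {
    chomp $entry;
    next if $entry =~ m{/$};          # leave directories alone
    if (unlink $entry) {
        print "removed $entry\n";
    } else {
        warn "could not remove $entry: $!\n";
    }
}
close $list;
|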