A couple of things to think about...
First, people *will* find ways around whatever filters
you put in place. If you just want to stop the worst
of the nasty words from showing up, you'll probably
do OK at that, but your users will undoubtedly come up
with new ways to say the same thing that won't be caught
by your filter.
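Just to illustrate how little effort that evasion takes, here's a minimal sketch of a naive regex blocklist (the blocked words are harmless stand-ins, not a real list) and a couple of the trivial workarounds users will reach for:

```python
import re

# Hypothetical blocklist -- stand-ins for whatever words you'd actually filter.
BLOCKED = ["darn", "heck"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, BLOCKED)) + r")\b",
                     re.IGNORECASE)

def is_clean(message: str) -> bool:
    """Naive check: True if no blocked word matches."""
    return PATTERN.search(message) is None

print(is_clean("That was a darn shame"))     # False -- caught
print(is_clean("That was a d4rn shame"))     # True  -- leetspeak slips through
print(is_clean("That was a d a r n shame"))  # True  -- spacing slips through
```

You can keep adding variants to the pattern, but your users only need to be clever once; you need to anticipate everything.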
What are you really trying to accomplish here? If it's
a "letter of the law" situation, you've got
half a chance. If you're trying to stop participants
from communicating naughty ideas, you will fail.
Then, to make matters worse, the more words and permutations
you try to filter out, the more false positives you'll
generate.
Context is everything -- several years ago, AOL drew some
bad publicity when breast cancer survivors were repeatedly
dinged for using bad language in chat rooms and user
profiles.
That's one example.
George Carlin sets forth several more in his "Seven
Dirty Words" bit, like "You can prick your finger,
but don't finger your prick."
The more questionable words you try to block, the more
legitimate conversation you will block unintentionally.
And the more clever your users will get in their attempts
to sidestep your bot.
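Here's the false-positive side of that, sketched with the same hypothetical blocklist approach -- a bare substring match of the kind that tripped up AOL:

```python
import re

# Matching a word as a bare substring, rather than in context,
# flags perfectly legitimate text.
NAIVE = re.compile("breast", re.IGNORECASE)

messages = [
    "I'm a breast cancer survivor, five years now.",
    "Chicken breast recipes, anyone?",
]
for msg in messages:
    if NAIVE.search(msg):
        print("FLAGGED:", msg)  # both get flagged; neither is abuse
```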
Automating analysis of the English language is not
something that can be done with a few Perl regexps.
Work on your bot, sure, but consider using it to
alert a human who can read the questionable content in
context and take action, rather than having the bot
take action all by itself.
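A minimal sketch of that flag-for-review shape (the word list, queue, and delivery function here are hypothetical placeholders, not a specific library or your actual setup):

```python
import re
from queue import Queue

# Hypothetical blocklist and review queue -- adapt to your own stack.
SUSPECT = re.compile(r"\b(darn|heck)\b", re.IGNORECASE)
review_queue: Queue = Queue()

def deliver(text: str) -> None:
    """Placeholder for however your system actually posts a message."""
    print("posted:", text)

def handle_message(user: str, text: str) -> None:
    """Post the message as usual, but flag anything suspicious for a human."""
    if SUSPECT.search(text):
        # Don't delete or censor automatically -- queue it with context so a
        # moderator can read it and decide what, if anything, to do.
        review_queue.put({"user": user, "text": text})
    deliver(text)

handle_message("alice", "What the heck is going on?")
print("pending review:", review_queue.qsize())  # 1 item waiting for a human
```

The bot stays cheap and dumb; the judgment call about context stays with a person.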