No such thing as a small change
(OT) Security Rant by Ovid (Cardinal) on Dec 05, 2001 at 02:56 UTC
Without pointing out specific threads, I have noticed that some threads on this site offer questionable security advice. In fact, one recent node openly acknowledged that it was offering insecure advice in order to implement an easy-to-use solution. This is bad. Deliberately introducing any weakness compromises the security of the entire system, even if you are convinced that it doesn't.
You Don't Know All of the Problems
Security can be terribly complicated, and just because you've applied all of the latest patches, kept your virus definitions up to date, and have a rock-solid firewall doesn't mean that some clever attacker won't find a new loophole tomorrow. Even if there are no loopholes in the system architecture, social engineering is still a common exploit. ("Hello, this is Joe at the help desk. I need your password to test foo.") In fact, the larger your system or organization, the more significant these concerns become.
Case in point: on a recent software project, users logged in over a secure connection and were issued a session ID, which was sent back to the server via a cookie. Since I set up the cookie so that it could only be returned over a secure connection, I didn't really need session IDs, right? I could just have the user store their password in a cookie and return that. It's easier to implement.
Of course, that's the wrong attitude to take. A few months after this system was up and running, we needed to port it to a new server. Someone forgot to install a new SSL certificate and I was (ugh) ordered to change the program to return the cookies regardless of whether or not the connection was secure. If I hadn't used session IDs, the cookie would always be sending the password. Since I did use session IDs, this password is not sent. Of course, the initial login still had the password sent over an unencrypted connection, but at least if someone starts sniffing the connection after the initial logon, they won't see a password.
So, we use session IDs. These don't remain static for a session! I issue a new ID on every connection to lower the likelihood of someone hijacking the session by submitting an old ID. This still isn't perfect, but defense in depth is the key here. Naturally, this all broke down when we lost the SSL certificate, but it's better than nothing.
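The rotation scheme described above can be sketched in a few lines. This is a minimal illustration, not the code from the project: the in-memory store and function names are mine, and a real application would keep sessions in a shared database or cache.

```python
import secrets

# Hypothetical in-memory session store, mapping session ID -> username.
# Illustrative only; a real app needs shared, persistent storage.
SESSIONS = {}

def create_session(username):
    """Issue a fresh, unguessable session ID at login."""
    sid = secrets.token_hex(32)
    SESSIONS[sid] = username
    return sid

def rotate_session(old_sid):
    """Retire the presented ID and issue a new one on every request,
    so a sniffed or replayed ID stops working after the next request."""
    username = SESSIONS.pop(old_sid, None)
    if username is None:
        raise ValueError("invalid or expired session ID")
    return create_session(username)
```

The point of popping the old ID before issuing the new one is that a hijacker who captures an ID is racing the legitimate user: as soon as the next request rotates the session, the stolen ID is dead.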
Multiple Paths for Failure
Another example is a story a friend (lemming) related to me about being asked to do QA on a product that had 17 binary switches. He was asked to test all possible combinations. That doesn't seem too hard until you realize that there are 2^17 (131,072) states. Assuming he could test one state a minute, it would take three months to test every state! And that's assuming that all tests pass (or, if they fail, that we ignore this during the testing). With 131,072 paths of potential failure, we can easily see how security could get complicated. Of course, if you have a product with only 17 binary switches, that's probably not a big product.
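The arithmetic behind that estimate is worth a quick sanity check:

```python
# 17 independent binary switches yield 2**17 distinct configurations.
states = 2 ** 17                      # 131072
minutes_per_test = 1                  # the assumption in the story
days = states * minutes_per_test / (60 * 24)
# days is roughly 91 -- about three months of nonstop, round-the-clock testing.
```

And that's with no sleep, no failures, and only one tester. Exhaustive testing stops being feasible almost immediately as configuration options multiply.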
Now, if you add to this equation the problem of bringing on maintenance programmers unfamiliar with your security features, plus possible security concerns in your firewall, servers, databases, etc., you quickly begin to get an idea of the scope of the problem. You can't prove a system secure.
Security by Obscurity
Generally, people view security by obscurity (SbO) as a bad idea. Not necessarily. SbO is great if it's an additional security measure. Do you really want to hand people the names of the tables in your database? Do you want to hand them the field names of those tables? If you have perfect security, it shouldn't matter. Unfortunately, since you can't prove your system secure, it does matter. I have shown more than one person how I can destroy their database with a well-crafted URL. Of course, that means that I need to have a reasonable guess as to their table names (amongst other things).
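The "well-crafted URL" attack above works by smuggling SQL through unvalidated input, which is why leaked table names help an attacker so much. Here is a minimal sketch (using Python's built-in sqlite3 with an in-memory database as a stand-in; the table and data are invented for illustration) of the difference between interpolating input into SQL and binding it as a parameter:

```python
import sqlite3

# Illustrative in-memory database standing in for a real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('ovid', 'ovid@example.com')")

def lookup_unsafe(name):
    # Interpolating user input straight into SQL: a crafted value such
    # as "x' OR '1'='1" rewrites the query and dumps every row.
    return conn.execute(
        "SELECT email FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # A bound parameter is treated strictly as data, never as SQL.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)).fetchall()
```

With parameterized queries, knowing your table names buys the attacker far less, which is exactly the point: obscurity is a useful extra layer, not a substitute for input handling done right.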
One of the more interesting problems with security by obscurity relates to usernames. I can scan through many corporate Web sites and pick up user names just by reading the email addresses. If I see a standard on how those are being implemented, I now can extrapolate more user names. Since many people have the same user name on their email as they do for other systems, this can be useful. Often, getting a list of user names can be half the battle of cracking a system. Your email shouldn't be giving away that information, either.
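Harvesting those user names takes almost no effort, which is the worrying part. A sketch, with an invented page snippet (addresses and the first-initial-plus-surname convention are hypothetical):

```python
import re

# Hypothetical contact-page text scraped from a corporate site.
page = """
Contact: jsmith@example.com, mdoe@example.com
Support: support@example.com
"""

# Pull the local part of every address. If the site follows a visible
# convention (first initial + surname), each one is a likely login
# name on the company's other systems too.
usernames = sorted(set(re.findall(r"\b([a-z][\w.]*)@[\w.]+", page)))
```

A few lines of code against a public Web site, and an attacker is halfway to a credential list before touching your network.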
Why Would Anyone Crack Me?
Because they can.
Maybe you don't think that this matters, but what if they crack your box and use this to launch attacks on other boxes? Sooner or later, some large corporation suffering from a distributed denial of service attack is going to consider suing admins of poorly protected boxes for damages. If that suit isn't tossed out of court, can you afford to lose that suit? Heck, can you afford to win that suit? That could easily be a Pyrrhic victory.
What if they crack your box and get your passwords from it? No matter, you and all of your users would never think about reusing your passwords, would you? Of course you do. Now, the cracker has a list of user names and passwords. Enough digging around and the cracker is going to find some use for them.
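One mitigation for the stolen-password scenario (not something the original rant spells out, but a standard practice worth naming) is to store salted hashes rather than passwords, so a cracked box yields nothing directly reusable. A minimal sketch; the iteration count and function names are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; store (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

This doesn't stop the cracker from trying the hashes offline, but it buys time and, crucially, means the reused-password problem described above doesn't hand them working credentials for every other system immediately.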
Sometimes we offer bad security advice. Sometimes we make poor security choices. If you see this, take steps to remedy it. How often do those "Crack My Box" contests end with no winner? Not very often. And let's face it, those are boxes where every effort has been made to ensure their safety. If you don't make every effort, you're vulnerable. If you're not a security expert (and I'm not), don't take short-cuts. If you are a security expert, then you're already not taking short-cuts.