Here are a few specific failures that are on the “flush list” (if I may borrow your quaint term ...) of ways in which we can make most web sites that we encounter drop to their knees. How many of these apply to you?
(1) Duplicate a parameter in the GET string of the URL, converting single values into multi-valued ones.
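Point (1) is easy to demonstrate with any query-string parser. A minimal sketch in Python (used here only for brevity; the parameter name `id` is mine, not from any real site) shows why code that expects a single scalar chokes:

```python
# Duplicated GET keys come back as lists, which naive code rarely expects.
from urllib.parse import parse_qs

qs = "id=42&id=43"          # attacker duplicates the key in the URL
params = parse_qs(qs)
print(params["id"])          # ['42', '43'] -- a list, not the scalar '42'
```

Code that blindly does `params["id"]` and passes it into a SQL layer now hands that layer a list instead of a scalar, and whatever happens next was never tested.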
(2) Alter the values in a POST, including the variable names of non-key fields; NULL values can often be posted instead. In one case, we succeeded in wiping out a customer’s name this way. (All validation was performed on the client side; none by the server.)
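That customer-name wipe only works because nothing like the following ever runs on the server. A hedged sketch, with hypothetical field names, of the kind of server-side check that would have stopped it:

```python
def validate_post(form):
    """Server-side validation: reject unknown keys and empty/NULL names."""
    allowed = {"name", "email"}
    if set(form) - allowed:          # unexpected keys smuggled into the POST
        return False
    name = form.get("name")
    return bool(name and name.strip())

assert validate_post({"name": "Alice", "email": "a@example.com"})
assert not validate_post({"name": None})                     # NULL posted
assert not validate_post({"name": "Eve", "is_admin": "1"})   # smuggled key
```

The point is not this particular function; it is that the server re-checks everything, whatever the client-side JavaScript claims to have already verified.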
(3) Submit two POSTs within milliseconds of each other, before the reply to the first can be returned. The two POSTs execute simultaneously, and nearly every web site out there (for some reason) uses MySQL without the benefit of SQL transactions. If you alter the value slightly in the second POST, something usually fails, because the first POST (still in progress) has altered values just ahead of it.
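The usual fix for (3) is to stop doing read-then-write in application code and let the database arbitrate. A sketch using SQLite for portability (table and column names are illustrative; MySQL with InnoDB transactions behaves the same way):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO orders VALUES (1, 'pending')")
db.commit()

# Atomic check-and-write: the WHERE clause carries the precondition, so a
# second, racing POST simply matches zero rows instead of clobbering state.
with db:  # transaction: commits on success, rolls back on exception
    cur = db.execute(
        "UPDATE orders SET status = 'shipped' "
        "WHERE id = ? AND status = 'pending'", (1,))
    first_wins = (cur.rowcount == 1)

print(first_wins)  # True; a duplicate of the same POST would match 0 rows
```

Two simultaneous POSTs then serialize inside the engine: one wins, the other sees zero affected rows and can be rejected cleanly instead of corrupting data.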
(4) &is_admin=1. Stupid how often that works.
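The only cure for (4) is that authorization never comes from the request at all. A tiny sketch (the session layout is hypothetical):

```python
def is_admin(session, params):
    """Authorization comes from server-side session state only; any
    is_admin flag arriving in the request parameters is ignored entirely."""
    return session.get("role") == "admin"

assert not is_admin({"role": "user"}, {"is_admin": "1"})   # the trick fails
assert is_admin({"role": "admin"}, {})
```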
(5) Fiddling the values of hidden fields, which are often used to communicate not only state, but data. As a cardinal rule, the client should never be trusted, since the host (when dealing with a deliberate attempt at sabotage) cannot even know whether the intended client code is running at all.
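If state really must ride in a hidden field, it can at least be made tamper-evident. A sketch of HMAC-signing such a field (key handling is simplified; in practice the secret lives in server-side configuration, never in the page):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"   # illustrative; never sent to the client

def sign(value: str) -> str:
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

def verify(value: str, sig: str) -> bool:
    return hmac.compare_digest(sign(value), sig)

token = "price=19.99"            # the hidden field's value
sig = sign(token)                # shipped alongside it in a second field
assert verify(token, sig)        # untouched field passes
assert not verify("price=0.01", sig)   # fiddled value is caught
```

Signing only proves the field was not altered; it does not make the design good. Better still is to keep the data server-side and send the client an opaque session key.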
(6) Observing the sequence of JSON calls that routinely occur and simply running them in an unpredictable order, especially running calls before their prerequisite calls have occurred. It becomes pathetically obvious that the correct operation of the host depends utterly on the correct operation of the client. (In one notable case, the POST exchange simply got bigger and bigger, and, sure enough, the final JSON call would happily commit to the database anything that it received. Every single thing was fully exposed to the client.)
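One way to stop out-of-order JSON calls is a per-session state machine on the host, so each call is rejected unless its prerequisite has already succeeded. A minimal sketch (the call names are hypothetical):

```python
# Each call may run only after its prerequisite; anything else gets a 409.
ALLOWED = {None: {"start"}, "start": {"upload"}, "upload": {"commit"}}

def handle(session, call):
    if call not in ALLOWED.get(session.get("step"), set()):
        return 409                    # out of order: refuse to proceed
    session["step"] = call
    return 200

s = {}
assert handle(s, "commit") == 409     # skipping prerequisites is rejected
assert handle(s, "start") == 200
assert handle(s, "upload") == 200
assert handle(s, "commit") == 200
```

With the legal transitions recorded server-side, replaying captured calls in a scrambled order gets you nothing but error responses.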
(7) Mobile app developers, especially of non-hybrid apps, innocently assume that no one can read their traffic and that no one but their intended client can submit to their host. They fail to use SSL, and fail to check for the presence of debuggers.
In short ... code that is accepted, and released to production, based only on time-strapped programmers verifying that the app or site “seems to work” when they ... who of course know it the very best ... use it in exactly the “right” way. And, in particular, blithely assuming that no one out there possesses a comparable level of technical knowledge and intends to do them harm.
Now, in starting this Meditation, I did not intend to merely hear my own head rattle. I was frankly hoping that other Monks, instead of merely (and Anonymously, and repeatedly?!) referring to “poo,” would share some stories of their own ... beyond a mere blanket reference to OWASP, which of course is a resource well worth bookmarking. Every one of us has to build and maintain apps or sites that are robust and secure, even against deliberate attempts to cause mischief. Yet, the Internet is apparently full of sites that can be dropped (and in ways that prompt widely-read magazine articles) using nothing more than a debugger.
What are your stories, and what did you do about them? Did it prompt you to use a particular CPAN module, a particular framework, or to adopt a particular Best-to-you Practice?
Re^2: Crash-Test Dummies: A Few Thoughts on Website Testing
by Your Mother (Bishop) on Oct 12, 2015 at 14:31 UTC