PerlMonks
Re: strict isn't everything
by brianarn (Chaplain) on Jun 11, 2002 at 16:21 UTC [id://173649]
I've written a few CGI upload scripts before, so I'll give it a shot - no guarantee of quality though. :)

- If the programmer never talked to the admin about where the file will be written, the webserver may not have write permission to that directory, so every open call will fail - and nothing checks for success with something like open(OUTPUT,">>$outputFile") or die "Couldn't open $outputFile: $!\n";
- The regex run against the _file_name only checks that it isn't all whitespace. That opens a nice hole for someone to slip in a malicious system call with backticks and do God knows what.
- I'm guessing the $query CGI object was instantiated somewhere in the code above, and yet the script tries to set $CGI::DISABLE_UPLOADS and $CGI::POST_MAX after the fact - these would have to be set before the $query object is created. If not, one of two things will happen:
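The fixes above can be sketched roughly as follows. This is a minimal sketch, not the original script: the sanitize_name sub, the /var/uploads directory, and the 'resume' parameter name are all my own hypothetical stand-ins. The key points are that the CGI limit variables are assigned before CGI->new, the filename is whitelisted rather than merely checked for whitespace, and the open is checked.

```perl
use strict;
use warnings;
no warnings 'once';   # we assign $CGI::POST_MAX exactly once, deliberately

# Hypothetical sanitizer: whitelist safe characters instead of merely
# rejecting all-whitespace names, which leaves backticks and other
# shell metacharacters intact.
sub sanitize_name {
    my ($name) = @_;
    return undef unless defined $name;
    my ($base) = $name =~ m{([\w.-]+)\z};   # keep only a trailing run of safe chars
    return $base;
}

# Only runs when actually serving a CGI request.
if ($ENV{GATEWAY_INTERFACE}) {
    require CGI;

    # These must be set BEFORE the $query object is created - CGI.pm
    # consults them while parsing the incoming request, so assigning
    # them afterwards does nothing.
    $CGI::POST_MAX        = 3 * 1024 * 1024;   # reject posts over 3MB outright
    $CGI::DISABLE_UPLOADS = 0;                 # uploads explicitly allowed

    my $query     = CGI->new;
    my $file_name = sanitize_name( scalar $query->param('resume') )
        or die "Unacceptable file name\n";
    my $outputFile = "/var/uploads/$file_name";   # hypothetical upload directory

    # Always check open and report $! so a permissions problem surfaces.
    open my $out, '>>', $outputFile
        or die "Couldn't open $outputFile: $!\n";
    close $out or die "Couldn't close $outputFile: $!\n";
}
```

Note that the whitelist also strips path components, so a name like ../../etc/passwd collapses to just passwd instead of escaping the upload directory.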
- There are checks in the file-writing loop to ensure that the file isn't bigger than 3MB. If POST_MAX had been set properly earlier, this wouldn't have been a problem. In the odd case that the admin has POST_MAX set higher than 3MB, the check would at least keep the file size under the limit - but it also just stops printing, so if the file is a friggin' huge Word doc, it'll be corrupted.
- The file-size checks, both inside the loop and at the end, could be written much more simply with -s $outputFile, rather than stat()ing the file, saving the results to an array, and then checking the 8th element (index 7).
- It does another check after writing the file to disk, and if the size exceeds 3MB it removes the file. However, it never verifies that the removal succeeded with something like unlink $outputFile or die "Couldn't unlink $outputFile: $!\n";
- Since the file was opened in append mode, if someone uploads a 2MB resume, then tweaks it and uploads it again, we have problems. The writing loop appends whatever it receives onto the old resume until the file hits 3MB, then stops writing. That would pretty much corrupt any non-text resume, and really mess up a text one: the contents would be the old resume plus however much of the new one fit under 3MB.
- Because it opens in append mode, never checking for the file's existence or overwriting it, once a person hits the 3MB limit they can't upload a revised resume at all.
- The loop keeps the file size under 3MB, so the unlink check at the end will never actually fire.

It seems that most of these errors won't be encountered anyway, because a prior error will have crashed the script first.
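To make the -s, unlink, and append-mode points concrete, here is a small self-contained sketch using a temporary file as a stand-in for the uploaded resume (the 3MB limit is the one from the post; everything else is illustrative):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);   # core module; gives us a scratch file

my $LIMIT = 3 * 1024 * 1024;   # the 3MB cap discussed above

# Hypothetical stand-in for the file the upload loop just wrote.
my ($fh, $outputFile) = tempfile();
print {$fh} 'x' x 1024;        # pretend 1KB of resume data
close $fh or die "Couldn't close $outputFile: $!\n";

# -s gives the size directly - no need to stat() into an array and
# then index its 8th element.
my $size = -s $outputFile;

if ($size > $LIMIT) {
    # Check the unlink too: a failed removal would silently leave the
    # oversized file on disk.
    unlink $outputFile
        or die "Couldn't unlink $outputFile: $!\n";
}

# Opening with '>' truncates, so a re-uploaded resume replaces the old
# one instead of being appended onto it as with '>>'.
open my $out, '>', $outputFile
    or die "Couldn't open $outputFile: $!\n";
close $out or die "Couldn't close $outputFile: $!\n";
```

The '>' open is the real fix for the append-mode problems: each upload starts from an empty file, so there is nothing stale to corrupt and no way to get stuck at the 3MB ceiling.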
Reading through this script makes me feel better about my CGI-fu because now I know that I at least understand enough basics to not be destroying anything. :) ~Brian
In Section: Meditations