Ok, all fair enough and a good challenge to me. However, you keep talking about webservers and the relevant standards or RFCs. In my case, though, what was I supposed to read? The only standards I can think of are things like the Unix or Linux file system hierarchy standard, and that's not relevant to my concern. (At least, if it is, the connection isn't obvious to me.)
I ended up doing this because I was rereading Intermediate Perl, and I kept thinking about the general problem that file system crawling presents: What do you do when (1) you need to drill through a structure, (2) you have no way to predict in advance how far down it goes, and (3) at each branch the items you find may be a simple thing (a file) or a complex thing (a directory containing zero or more directories and zero or more files)? How do you build a map of such a structure in code? Once you have the map, how do you reorganize it for printing? Perhaps you don't want to print it at all, but instead want to extract one piece of information (the byte count) about one of the types of thing in your map (the files). How do you do that most efficiently?

The majority of what I do involves files and folders: scanning them for specific types of things, checking their size, updating them, and so on. So it's a problem I need to care about.
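To make that concrete, here's roughly the shape of the thing I had in mind. This is a cut-down sketch rather than my actual script, and the starting directory and sub names are just placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Spec;

    # Build a nested hash "map" of a directory tree: each directory
    # becomes a hashref, each file becomes its size in bytes.
    sub build_map {
        my ($dir) = @_;
        my %map;
        opendir(my $dh, $dir) or die "Can't open $dir: $!";
        for my $entry (grep { $_ ne '.' && $_ ne '..' } readdir $dh) {
            my $path = File::Spec->catfile($dir, $entry);
            next if -l $path;                      # skip symlinks so we can't loop
            if (-d $path) {
                $map{$entry} = build_map($path);   # complex thing: recurse, depth unknown
            }
            else {
                $map{$entry} = (-s $path) // 0;    # simple thing: just store the byte count
            }
        }
        closedir $dh;
        return \%map;
    }

    # Walk the finished map to pull out one piece of information:
    # the total byte count of all the files.
    sub total_bytes {
        my ($map) = @_;
        my $total = 0;
        for my $value (values %$map) {
            $total += ref $value eq 'HASH' ? total_bytes($value) : $value;
        }
        return $total;
    }

    my $map = build_map('.');    # start wherever; '.' is just for the example
    print total_bytes($map), " bytes\n";

The point of keeping the whole thing as nested hashrefs is that once the map is built I can walk it again for printing, for byte counts, or for anything else, without going back to the disk a second time.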
If I understand you correctly, you're saying that my time would have been better spent reading through the code in File::Find. Is that your point? If not, then I would be curious to know what you would recommend I do. But, please, no more webservers. I'm not writing one. I promise.
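(For what it's worth, if the recommendation is simply to use File::Find rather than study its source, I gather the byte-count question above collapses to something like this, assuming I'm reading its docs correctly:

    use strict;
    use warnings;
    use File::Find;

    # File::Find does the recursion; the callback runs once per entry,
    # with $_ set to the current name and the cwd set to its directory.
    my $total = 0;
    find(sub { $total += -s $_ if -f $_ }, '.');
    print "$total bytes\n";

That hides the recursion, but it doesn't leave me with a map I can reuse, which was half the point of the exercise.)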