Thank you for the welcome and the idea, Athanasius. Post-processing only the text files is a possibility if there are no other options. I'll admit that I'm not keen on slurping large files (2+ megabytes) into memory again and doing a regex replace like the following:

    $fileguts =~ s/\r{2,}\n/\r\n/g;

That should be reasonably efficient at the task, though. Your idea does raise another thought: avoiding the mangling of files which come out of Unix-based systems, where \n would be changed to \r\n. It might be preferable to do something like:

    $fileguts =~ s/\r+\n/\n/g;

In the interest of not potentially mangling files, for the moment I will continue to hang out in the hope of another, MIME::Parser-based fix. :) Cheers!

PS: Another possibility might be to change the original MIME message before writing it to disk, say from:

    Content-Type: text/plain;

to:

    Content-Type: application/x-msexcel;

A bit of an ugly hack to trick MIME::Parser, though probably doable, and it might be preferable to the extra disk-load/regex-replace/disk-save cycle. While I'm not expecting hundreds of files per minute or second, it is best to assume that something like that might happen if an ISP error suddenly causes a surge or someone attempts a DoS/mailbomb attack.

In reply to Re^2: Is it possible to force MIME::Parser to extract text-files on a Windows system without the extra CR's on the end of lines?
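For what it's worth, the two regex approaches above can be sketched as small helpers. This is only a minimal sketch of the slurp-and-normalize idea, not a MIME::Parser-level fix; the sub names are made up for illustration, and opening the file with the :raw layer is assumed so that Perl's own CRLF translation on Windows doesn't interfere:

```perl
use strict;
use warnings;

# Collapse the doubled CRs ("\r\r\n" or more) that text-mode extraction
# on Windows can leave behind, back down to a single "\r\n".
sub normalize_crlf {
    my ($text) = @_;
    $text =~ s/\r{2,}\n/\r\n/g;
    return $text;
}

# Alternative: strip CRs entirely, so files that originated on
# Unix-based systems come out with plain "\n" line endings.
sub strip_cr {
    my ($text) = @_;
    $text =~ s/\r+\n/\n/g;
    return $text;
}
```

In use, one would slurp the extracted file with open my $fh, '<:raw', $path, run one of the helpers over the contents, and write it back with '>:raw'; for 2+ MB files that is the extra load/replace/save cycle the post would rather avoid.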
by WilliamDee