If I read from a pipe (a piped open), then close fails if the pipe is broken -- this is why I check the exit status of close.
Yes, checking the return value of close is very important on piped opens, as its documentation shows:
close $fh or die $! ? "Error closing pipe: $!"
: "Piped open exit status: $?";
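To make the two failure modes concrete, here is a minimal, self-contained sketch. The child command is invented for illustration (it is just perl itself, via $^X, printing two lines); on close, $! is set for an OS-level I/O error, otherwise a nonzero $? carries the child's exit status:

```perl
use strict;
use warnings;

# Piped open: read lines from a child process. The child here is
# perl ($^X) printing two lines; any real command would do.
open(my $fh, '-|', $^X, '-e', 'print "line1\nline2\n"')
    or die "Cannot start child: $!";
my @lines = <$fh>;

# $! true  => an actual I/O error on the pipe;
# $! false => close waited on the child and it exited nonzero ($?).
close $fh or die $! ? "Error closing pipe: $!"
                    : "Piped open exit status: $?";
print scalar(@lines), " lines read\n";
```

If the child dies partway through (a broken pipe), the close here is what surfaces it, which is exactly why its return value must be checked.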
"when data read from local file is broken in the middle of the process of reading"
It depends a bit on what "broken" means. If you are reading from a pipe, then yes, you should be able to detect this condition. If you're reading from a regular file, however, I don't think the reading program will be able to detect a failure in the writer, if that is what you mean. If the file format allows for any kind of sanity checks (like headers with record lengths or other well-formedness checks), or even checksums, then I would use those.
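As an illustration of the sanity-check idea, here is a hedged sketch using an invented record format -- a 4-byte big-endian length, the payload, then a one-byte checksum (sum of payload bytes mod 256). None of this reflects the poster's actual data; the point is that truncation or corruption anywhere makes the read die:

```perl
use strict;
use warnings;

# Read one record of the hypothetical format: 4-byte length (big-endian),
# payload, 1-byte checksum. Returns the payload, undef at clean EOF,
# and dies on any truncation, read error, or checksum mismatch.
sub read_record {
    my ($fh) = @_;
    my $got = read($fh, my $len_raw, 4);
    die "Read error: $!"          if !defined $got;
    return undef                  if $got == 0;      # clean EOF
    die "Truncated length header" if $got < 4;
    my $len = unpack('N', $len_raw);
    die "Truncated payload"  if read($fh, my $payload, $len) != $len;
    die "Truncated checksum" if read($fh, my $ck_raw, 1) != 1;
    my $sum = 0;
    $sum = ($sum + $_) % 256 for unpack('C*', $payload);
    die "Checksum mismatch: record is corrupt"
        if $sum != unpack('C', $ck_raw);
    return $payload;
}
```

A file cut off mid-record, or a flipped byte in the payload, both abort the program instead of silently feeding it bad data -- which is the behavior the poster wants.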
This is a program that computes data for manufacturing at a company. I have to be sure that when data read from a local file is broken in the middle of the process of reading, the whole program fails.
I have worked with many hard drives that have either hardware errors or file system errors (usually both types occur at the same time). Your original Perl program will abend with a fatal error if the complete file cannot be read from the disk. It is possible to open a file which cannot be successfully read to EOF.
close() does a lot for a writer: it flushes unwritten, cached data to the disk. Much less is done for a reader. A close() on a read handle does not modify the actual file that is being read; again, that is not true of a write handle.
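Since close() reports little for a reader, one way to catch a mid-read failure on a plain file is to check the handle's error flag after the read loop. A sketch under those assumptions -- read_all_checked is a made-up helper name, and the error() method comes from IO::Handle:

```perl
use strict;
use warnings;
use IO::Handle;   # provides the error() method on lexical filehandles

# Read every line of a plain file, dying if the open, the reads,
# or the close fail. Reaching the end of the while loop alone does
# not prove success: the loop also ends on a read error, so the
# handle's error flag is checked explicitly afterwards.
sub read_all_checked {
    my ($path) = @_;
    open(my $fh, '<', $path) or die "Cannot open $path: $!";
    my @lines;
    while (my $line = <$fh>) {
        push @lines, $line;    # process the line here
    }
    die "Read error on $path: $!" if $fh->error;
    close $fh or die "Error closing $path: $!";
    return @lines;
}
```

On a disk with bad sectors, the read loop simply stops when it hits the unreadable region; the $fh->error check is what turns that silent early stop into the fatal error the poster wants.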
Your original program will fail if the file is corrupted. Let's say that happens: then what is your plan? A redundant disk system is probably what is needed.