Re^3: Reading binary file in perl having records of different length

by johngg (Canon)
on Jun 18, 2014 at 10:18 UTC


in reply to Re^2: Reading binary file in perl having records of different length
in thread Reading binary file in perl having records of different length

I don't have a production grade binary yet

Since you are dealing with binary data I don't think your "eyecatcher" is a good idea, as "\x3d\x3d" ("==") could legitimately be part of your data. I think it better to rely on a record starting with a byte count immediately followed by a fixed-length header string that can easily be identified and validated, perhaps by regular expression, e.g. /^Record\s\d{5}$/ for "Record 00001", "Record 02784" etc. The chance of such a string appearing in the binary data is far lower, which should make unravelling bad records much easier.
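
For illustration, here is a minimal sketch of reading such a record, assuming (purely as an example) a two-byte big-endian payload length immediately followed by the twelve-character header; the file name and field sizes are hypothetical:

    use strict;
    use warnings;

    # Hypothetical layout: 2-byte big-endian payload length, then a
    # 12-character header such as "Record 00001", then the payload.
    open my $fh, q{<:raw}, q{records.bin} or die "open: $!";

    while ( ( read( $fh, my $lenBytes, 2 ) // 0 ) == 2 ) {
        my $payloadLen = unpack q{n}, $lenBytes;

        read( $fh, my $header, 12 ) == 12
            or die q{Truncated header};
        $header =~ m{^Record\s\d{5}$}
            or die qq{Bad header: $header};

        read( $fh, my $payload, $payloadLen ) == $payloadLen
            or die qq{Truncated payload in $header};

        # ... process $payload here ...
    }

    close $fh or die "close: $!";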

I don't know if you have any control over the format of the binary files but I feel that the "==" between records is just storing up trouble and should be reconsidered. It is short enough that it is quite likely to appear in the data and, by preceding the record, it adds complications to record alignment.

Cheers,

JohnGG

Re^4: Reading binary file in perl having records of different length
by sundialsvc4 (Abbot) on Jun 18, 2014 at 21:12 UTC

    I agree. I think that you should code this routine to be suspicious of the data, so that afterwards you can completely rely on it. For example, I presume the first two bytes of the file should be an eyecatcher: die if they're not. The next two bytes should decode to a plausible length ... die if they don't. Read the specified number of bytes ... die if you can't. The next thing that you read should either be "nothing" (end of file), or it should be another eyecatcher; rinse and repeat.

    Notice that, in this way, if the program runs successfully then you can indeed assert that the file's structure must be good. Since big files can and do become corrupt sometimes (and come from other people's software systems), this amount of caution is not paranoia. Not at all. (In fact, in a production setting, I would have a series of .t test files that prove, and re-prove, that all of these die calls actually work.)
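
    For example, one such .t file might look like the sketch below; Test::More and Test::Exception are standard CPAN testing modules, while MyRecordReader, read_records() and the t/data file names are purely hypothetical stand-ins for the code under test:

        use strict;
        use warnings;
        use Test::More tests => 2;
        use Test::Exception;

        use MyRecordReader qw( read_records );    # hypothetical module under test

        # A deliberately truncated file should trip one of the die calls.
        throws_ok { read_records( 't/data/truncated.bin' ) }
            qr/Truncated/, 'truncated file is rejected';

        # A known-good file should be read without complaint.
        lives_ok { read_records( 't/data/good.bin' ) }
            'well-formed file is accepted';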

    There will be no harm in simply reading two bytes, then two bytes, then n bytes, and so on, letting Perl and the filesystem handle all of the buffering for you. It really doesn't matter how big the file is.
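
    A minimal sketch of that loop, assuming the two-byte "==" eyecatcher the OP described followed by a two-byte length field; the big-endian encoding and the sanity limit on the length are assumptions:

        use strict;
        use warnings;

        open my $fh, '<:raw', 'records.bin' or die "open: $!";

        while (1) {
            my $got = read( $fh, my $eye, 2 ) // die "read: $!";
            last if $got == 0;                       # clean end of file
            die "Short read on eyecatcher" if $got != 2;
            die "Bad eyecatcher"           if $eye ne '==';

            read( $fh, my $lenBytes, 2 ) == 2
                or die "Short read on length field";
            my $len = unpack 'n', $lenBytes;         # assuming big-endian
            die "Implausible length: $len" if $len == 0 || $len > 65_000;

            read( $fh, my $payload, $len ) == $len
                or die "Short read on payload";

            # ... process $payload ...
        }

        close $fh or die "close: $!";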

      Thanks, great suggestion. I just wanted to digress and say that I am in QA and the purpose of this script was to automate 70-odd test cases, but all your suggestions point towards best practice and I really do appreciate that.

      I will add the necessary error checking and, if I run into any issues, I will report back for your guidance.
