I would recommend it because it's the least memory-intensive approach, and you're potentially using a lot of memory already. (Best would of course be to read the file line-by-line, but since you can't do that, this is the next best thing I could come up with.)
As to the other, you didn't say what you wanted to do with the records once you had them split, and I needed a dummy subroutine to round out the outline (and to note that your data is found in $1). Blame the chatterbox for the odd word choice. :-)
If God had meant us to fly, he would *never* have given us the railroads. --Michael Flanders
use Fcntl;

sysopen(DF, "test.txt", O_RDONLY) or die "sysopen: $!";   # no O_CREAT needed for reading
sysread(DF, $rec, -s DF);
close(DF);

# Split up records into array.
# Will lose \n on recs - add later.
@test = split /\n/, $rec;

# Do some work on @test
# Work... Work... Work...

# Build up into a single record, putting \n back in
$rec = join("\n", @test) . "\n";

sysopen(DF, "test.txt", O_WRONLY | O_TRUNC | O_CREAT) or die "sysopen: $!";
syswrite(DF, $rec);
close(DF);
So basically I want to make this as efficient as possible. I have to use sysread and syswrite to keep the I/O as lean as possible; i.e., reading a 1 MB file line-by-line could mean on the order of 2000 separate read operations through the buffered layer, while a single sysread takes just one!
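To make the slurp-in-one-call idea concrete, here is a minimal, self-contained sketch. The filename `sample.txt` and the five-record contents are purely illustrative; the point is that one sysread of `-s` bytes pulls in the whole file, after which split gives you the records (minus their newlines, which join puts back).

```perl
use strict;
use warnings;
use Fcntl;

# Create a small sample file (name and contents are illustrative).
open(my $out, '>', 'sample.txt') or die "open: $!";
print $out "rec$_\n" for 1 .. 5;
close($out);

# Unbuffered slurp: a single sysread() fetches the entire file.
sysopen(my $df, 'sample.txt', O_RDONLY) or die "sysopen: $!";
my $size = -s $df;
sysread($df, my $rec, $size) == $size or die "short read";
close($df);

# Split into records; the \n is stripped here and must be
# re-added with join before writing the file back out.
my @recs = split /\n/, $rec;
print scalar(@recs), " records read in a single sysread\n";
```

Whether this actually beats buffered line-by-line reads depends on the platform's stdio buffering, but it does guarantee exactly one read system call for the data.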