PerlMonks
Most sensible creators of files that size make them binary data with fixed-size records. That way, individual records can be read directly, without having to process the whole file in one serial pass. Then all you need is: a) the ability to see the remote file system from the local machine; b) the ability to seek or sysseek to positions > 2GB (i.e. a Perl built with large-file support).
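A minimal sketch of that random access, under stated assumptions: a hypothetical file `records.dat` with fixed 4096-byte records, and a Perl whose integers and file offsets are 64-bit, so `$n * $RECSIZE` can exceed 2GB. The file name and record size are placeholders, not anything from the original post.

```perl
use strict;
use warnings;

my $RECSIZE = 4096;            # assumed fixed record size
my $file    = 'records.dat';   # hypothetical remote-mounted file

# Read record $n in isolation: seek straight to its byte offset and
# pull back exactly one record's worth of bytes.
sub read_record {
    my ($n) = @_;
    open my $fh, '<:raw', $file or die "open $file: $!";
    # 0 == SEEK_SET; sysseek returns "0 but true" at offset 0, so 'or die' is safe
    sysseek $fh, $n * $RECSIZE, 0 or die "sysseek: $!";
    my $got = sysread $fh, my $buf, $RECSIZE;
    die "short read" unless defined $got and $got == $RECSIZE;
    return $buf;
}

# Overwrite record $n in place, leaving the rest of the file untouched.
sub write_record {
    my ($n, $data) = @_;
    die "record must be $RECSIZE bytes" unless length($data) == $RECSIZE;
    open my $fh, '+<:raw', $file or die "open $file: $!";
    sysseek $fh, $n * $RECSIZE, 0 or die "sysseek: $!";
    my $put = syswrite $fh, $data;
    die "short write" unless defined $put and $put == $RECSIZE;
    close $fh or die "close: $!";
}
```

The same pattern works whether the file lives on local disc or on an NFS/SMB mount; only the latency changes.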
So yes. Given the right infrastructural access to the remote file, and a suitable build of Perl, you can access that remote, terabyte-sized file using Perl. Access will be relatively slow compared to local disc, but that's inevitable. Depending upon the nature of the processing involved, it may make sense to pull chunks of the file across to local disc for processing, then write them back in place to the master, if required.

If accessing such large remote files is likely to become a regular requirement, and your machine is physically close (< 100 m) to the remote server, then setting up a dedicated 10-gigabit network connection is a possibility, though relatively expensive. A couple of 10GBASE-T cards (~$1000) and a few metres of cable, and you can have a dedicated network between you and the remote file that runs fast enough to rival a locally attached disc. Whether the local and remote hardware is capable of exploiting that bandwidth is another question.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In reply to Re: How to read 1 td file
by BrowserUk