in reply to SFTP->RPUT Crashes
One of the practical issues with large volume SFTP is that if your network isn't clean, it is pretty much guaranteed to fail somewhere during the transmission.
Any transmission protocol that relies on a long sequence of really large files all completing successfully is a design failure.
Better would be a design that 'put' files individually, and then retransmitted any individual file that fails. The protocol then works through networking issues via sheer dogged persistence.
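A minimal sketch of that per-file retry idea, assuming a Net::SFTP::Foreign-style object whose put() returns true on success (the $do_put coderef and put_with_retry name here are my own, not from any module):

```perl
use strict;
use warnings;

# Retry one transfer a few times instead of letting a single failure
# kill a monolithic rput of the whole tree. $do_put is a coderef that
# performs one file transfer and returns true on success.
sub put_with_retry {
    my ($do_put, $max_attempts) = @_;
    $max_attempts //= 3;
    for my $attempt (1 .. $max_attempts) {
        return 1 if $do_put->();
        warn "attempt $attempt failed, retrying\n"
            if $attempt < $max_attempts;
    }
    return 0;    # gave up after $max_attempts tries
}
```

You would then loop over the files yourself, something like put_with_retry(sub { $sftp->put($local, "$remote_dir/$local") }), so one flaky file costs you a few retries rather than the whole batch.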
My 0.02 anyway. This isn't a knock on 'rput', which would work if your files were small, but more a reality check on the issues with transmitting many gigabytes sans error.
David.
Re^2: SFTP->RPUT Crashes
by salva (Canon) on Feb 20, 2012 at 23:01 UTC