Everything that blue_cowdawg says is accurate. There are really only two ways you could make things appreciably faster:
- compress the file before the copy, or use a copy method that compresses as it copies
- split the file into pieces, transfer them in parallel, and reassemble the file afterwards. Some servers cap the transfer rate of any single connection but allow multiple simultaneous connections whose combined rate exceeds that cap. As blue_cowdawg also points out, this method won't help if some intervening hop ends up being the bottleneck
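A minimal sketch of the first option, compressing before the copy. The transfer step itself is only a placeholder comment (the host name is hypothetical); the rest runs locally and just verifies that the file survives the compress/decompress round trip. Note that `scp -C` achieves the same effect by compressing in-flight.

```shell
set -e
# build a compressible test file standing in for the real payload
printf 'some highly compressible payload line\n%.0s' {1..5000} > bigfile
gzip -c bigfile > bigfile.gz                      # compress before the copy
# scp bigfile.gz user@server.example.cn:/tmp/     # actual transfer (placeholder)
gunzip -c bigfile.gz > bigfile.roundtrip          # what the receiver would do
cmp bigfile bigfile.roundtrip && echo "roundtrip ok"
```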
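And a sketch of the second option, splitting and reassembling. The parallel uploads are again only a placeholder comment; the point is that `split` produces lexically ordered chunk names, so a plain glob reassembles them in the right order:

```shell
set -e
head -c 1048576 /dev/urandom > payload      # 1 MiB stand-in for the real file
split -b 262144 payload chunk_              # 256 KiB pieces: chunk_aa, chunk_ab, ...
# each chunk_* would be uploaded on its own connection here, in parallel
cat chunk_* > payload.rebuilt               # glob order == split order, so this is safe
cmp payload payload.rebuilt && echo "reassembled ok"
```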
With no more info than 'a server in China', it's hard to be more specific. However, given the file size and the likely sub-optimal transfer rate, it would be wise to use something like Net::FTPSSL, whose 'put' command allows you to pass an offset of -1, in which case it will attempt to resume the upload where it left off.
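Roughly like this, assuming an explicit-FTPS server; the host name, credentials, and file names are placeholders you'd replace with your own:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Net::FTPSSL;

# placeholder host/credentials -- substitute your own
my $ftps = Net::FTPSSL->new( 'server.example.cn',
                             Encryption => EXP_CRYPT,   # explicit FTPS
                             Debug      => 1 )
    or die "Can't connect: $Net::FTPSSL::ERRSTR";

$ftps->login( 'username', 'password' )
    or die "Login failed: " . $ftps->last_message();

$ftps->binary();

# An offset of -1 tells put() to check the remote file's size and
# resume the upload from that point instead of starting over.
$ftps->put( 'bigfile.tar.gz', 'bigfile.tar.gz', -1 )
    or die "Upload failed: " . $ftps->last_message();

$ftps->quit();
```

If the connection drops partway through the multi-GB transfer, re-running the same script picks up from wherever the previous attempt stopped rather than re-sending the whole file.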