PerlMonks
Re: Re: Re: FTP and checksum by Elian (Parson)
on Oct 03, 2003 at 13:53 UTC ( [id://296242] )
No, in fact they do not guarantee error-free transmission. Setting aside multibit errors that produce the same checksum from corrupted data (IP and TCP checksums are deliberately weak, for speed, and an error can yield bad data that still matches the checksum), the TCP checksum is effectively a per-hop checksum: routers may, and some do, recalculate and reset it when forwarding packets on to their next destination.
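To make that weakness concrete: the RFC 1071 ones'-complement checksum used by IP, TCP, and UDP is insensitive to the order of its 16-bit words, so data whose words get reordered in transit still verifies. A minimal sketch in Python (the function name is mine, not from the post):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit ones'-complement checksum, as used by IP/TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

original = b"\x12\x34\xab\xcd"
corrupted = b"\xab\xcd\x12\x34"  # the two 16-bit words swapped: different data...
assert original != corrupted
assert internet_checksum(original) == internet_checksum(corrupted)  # ...same checksum
```

Any corruption that swaps, or compensatingly flips bits across, 16-bit words slips straight through, which is exactly the multibit-error case described above.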
Checksums are generally verified as packets come into a router, against the over-the-wire data, and will catch some (but by no means all) errors. The packet then sits in router memory, and if the checksum is regenerated, it is computed from that in-memory copy. If the in-memory copy is corrupt (because of a bad RAM cell, transient power issues, or just cosmic rays), the checksum is generated over the now-corrupt data, and nothing in the transmission itself can detect that the data has gone bad. ECC and parity memory, if the router has it, will catch some, but again not all, instances of this.

This isn't theoretical. I know of cases where it has happened, and the only thing that caught a router with bad memory corrupting data in transit was that DECnet does do end-to-end checksumming of transferred files, and it was yelling about bad transmissions that the TCP streams never noticed.

If the data is important enough to go to some effort to validate the destination copy, there is also the non-zero possibility of some sort of man-in-the-middle alteration. You can certainly argue that failures or attacks like these are really, really unlikely. On the other hand, do you want a financial institution trusting that they won't happen when moving transactions against your bank account?
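The post names no code, but the DECnet-style end-to-end check it praises can be approximated by hashing the file at the source, transferring it, hashing it again at the destination, and comparing. A sketch in Python (the `file_digest` helper is illustrative, not an API from the original discussion; defeating a deliberate man-in-the-middle additionally requires sending the digest over a trusted channel or using a signature):

```python
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    """Stream a file through a cryptographic hash; constant memory for any size."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# At the source:      before = file_digest("payroll.dat")
# At the destination: after  = file_digest("payroll.dat")
# A mismatch catches per-hop corruption (bad router RAM, truncation) that
# the TCP checksum missed, because the comparison is end to end.
```

Unlike the Internet checksum, a cryptographic digest makes an accidental collision astronomically unlikely, so a match is strong evidence the destination copy is byte-for-byte identical.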
In Section: Seekers of Perl Wisdom