If the bytes being written are truly random and you view each field independently, shouldn't the probability of corrupt data being mistaken for a "marker value" always be 1 in 2**32, no matter which marker value you've chosen for that particular field? But I suspect the data isn't truly random, and you also seem to be asking about some combination of events, so more precise definitions are probably needed (e.g. what exactly do "total random corruption" and "if the whole 4GB were corrupted" mean?).
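To illustrate the point: under the truly-random assumption, the collision probability is the same for every choice of marker, so no marker is "safer" than another. A small sketch (the marker constants here are arbitrary examples, not from your question):

```python
import random

# If corruption writes 4 uniformly random bytes, the chance they equal
# any particular fixed 32-bit marker is 2**-32, regardless of the marker.
p = 1 / 2**32
print(f"P(random 4 bytes == marker) = {p:.3e}")

# Sanity check: count matches against two different markers over many
# random draws; at this sample size you typically see zero hits for both.
random.seed(0)
draws = [random.getrandbits(32) for _ in range(1_000_000)]
for marker in (0xDEADBEEF, 0x00000000):
    hits = sum(1 for d in draws if d == marker)
    print(f"marker 0x{marker:08X}: {hits} hits in 1e6 draws")
```

The expected hit count here is 1e6 / 2**32, about 0.0002, for either marker, which is the point: the choice of marker value doesn't change the false-positive rate against uniform random corruption.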
Is the only purpose of saving the offset with the value to detect errors? If "Corruption could write any of the possible values anywhere", wouldn't that mean that a bad write to a 4-byte value instead of one of the 4-byte offsets could not be detected? Are FEC codes or CRCs not an option?
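If CRCs are on the table, a per-record checksum catches a bad write anywhere in the record, not just in the offset field. A minimal sketch using `zlib.crc32`; the record layout (4-byte offset, 4-byte value, 4-byte CRC) is a hypothetical example, not your actual format:

```python
import struct
import zlib

def pack_record(offset: int, value: int) -> bytes:
    """Pack offset and value, then append a CRC32 over both fields."""
    body = struct.pack("<II", offset, value)
    return body + struct.pack("<I", zlib.crc32(body))

def check_record(rec: bytes) -> bool:
    """Return True if the stored CRC matches the record body."""
    body, (crc,) = rec[:8], struct.unpack("<I", rec[8:])
    return zlib.crc32(body) == crc

rec = pack_record(0x1000, 0xCAFEBABE)
assert check_record(rec)

# Corrupt a byte of the *value* field (offset untouched): the CRC check
# still fails, which the offset-only scheme could not guarantee.
corrupt = rec[:4] + bytes([rec[4] ^ 0xFF]) + rec[5:]
assert not check_record(corrupt)
print("intact:", check_record(rec), "corrupted:", check_record(corrupt))
```

Since CRC32 detects any burst error up to 32 bits, a single corrupted byte anywhere in the covered body is always caught, at the cost of 4 extra bytes per record.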