Thanks for all the helpful replies. Another little question that's bugging me (I'm not sure whether anybody will see this down here or not): does my function presume one endianness or the other, and does that even matter? Whether big-endian or little-endian, the leftmost bit of the leftmost byte will be the sign bit, correct? Does the endianness of the integer affect how it is shifted? It doesn't appear to so far: even on my little-endian Win32 system, I get the correct end result. And it doesn't seem to matter whether I remove the ">" from the pack and unpack at the end -- I still get the correct result. As I said, my understanding of the binary system is still pretty new.
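(For anyone else wondering about the same thing: shift operators work on the numeric value of an integer, not on its in-memory byte layout, so host endianness never changes the result of a shift; byte order only matters once you serialize the value with pack/unpack. The thread is about Perl's pack and unpack, but the same byte-order modifiers exist in Python's struct module, so here is a small sketch in Python illustrating the point.)

```python
import struct

# Shifting operates on the value, so it gives the same answer on
# big- and little-endian hosts alike. Python's >> is an arithmetic
# shift, preserving the sign.
n = -8
shifted = n >> 1  # -4 regardless of host byte order

# Endianness only appears when the value is laid out as bytes.
# ">" forces big-endian, "<" forces little-endian (like Perl's
# pack byte-order modifiers).
big = struct.pack(">i", -8)     # b'\xff\xff\xff\xf8'
little = struct.pack("<i", -8)  # b'\xf8\xff\xff\xff'

# As long as pack and unpack agree on byte order, the round trip
# recovers the same value -- which is why dropping the modifier
# from both ends can still appear to "work".
assert struct.unpack(">i", big)[0] == -8
assert struct.unpack("<i", little)[0] == -8
```

Note that in the big-endian layout the sign bit really is the top bit of the first byte (0xff), while in the little-endian layout it sits in the last byte, so "leftmost bit of the leftmost byte" is only true of the big-endian representation; at the value level, the most significant bit is the sign bit either way.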