Well, the problem apparently arises where you have characters above 127, because as long as every character's ordinal is less than 128, its encoding doesn't change, in either value or length, between UTF-8 and any other non-EBCDIC encoding.
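A quick illustration of that point, sketched in Python rather than the Perl used in this thread: ASCII text encodes to identical bytes in ISO-8859-1 (Latin-1) and UTF-8, while any character above 127 diverges.

```python
# ASCII (ordinals < 128) is byte-for-byte identical in both encodings.
s = "plain ASCII text"
assert s.encode("utf-8") == s.encode("latin-1")

# A character above 127 differs: one byte in ISO-8859-1 (Latin-1),
# two bytes in UTF-8.
assert "é".encode("latin-1") == b"\xe9"
assert "é".encode("utf-8") == b"\xc3\xa9"
```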
I've "lost" the original file because the updated file is checked in, so I recreated it as *.iso, and I get:
$ perl -nE'ord()>127 and print for split"",$_' *iso | wc -c
32
So, 32 characters with accents or what have you, putting their ordinals above 127.
$ ls -l messages.js.*
-rw-r--r-- 1 tanktalus tanktalus 2490 Dec 20 20:57 messages.js.iso
-rw-r--r-- 1 tanktalus tanktalus 2522 Dec 20 20:58 messages.js.utf8
And because they're above 127, converting to UTF-8 expands them to multiple bytes. In this case, all 32 characters expand to precisely two bytes each (though some characters in other languages can be three or four bytes each):
$ perl -nE'ord()>127 and print for split"",$_' *utf8 | wc -c
64
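The same counting can be sketched in Python (with hypothetical sample text, not the actual messages.js contents): each accented character contributes one byte above 127 in the Latin-1 bytes but two in the UTF-8 bytes, which is why the second count doubles.

```python
text = "déjà vu"  # hypothetical sample: two characters above 127 (é, à)
latin1_bytes = text.encode("latin-1")
utf8_bytes = text.encode("utf-8")

# Mirror the `ord()>127 and print ... | wc -c` pipeline:
# count how many bytes are above 127 in each encoding.
high_latin1 = sum(1 for b in latin1_bytes if b > 127)
high_utf8 = sum(1 for b in utf8_bytes if b > 127)

print(high_latin1, high_utf8)  # 2 4
```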
If we were in the situation you gave, there would have been no issue. For all pure-English text (not counting borrowings from other languages, such as "Hawaï" or "déjà vu"), ISO-8859-* and UTF-8 are bit-for-bit identical. It's only the characters with different binary representations between ISO-8859-1 (in this case, as that's the encoding most commonly used for French prior to UTF-8 taking over) and UTF-8 that caused a problem that had to be resolved, and I used Perl to do so.
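The actual Perl one-liner for the conversion isn't shown above; as a minimal sketch of the same fix in Python (the filenames are taken from the ls output, but the file contents here are a made-up sample):

```python
import os
import tempfile

# Hypothetical reconstruction of the fix: read ISO-8859-1 bytes,
# decode, and re-encode as UTF-8. Sample content, not the real file.
latin1_content = b"message: 'd\xe9j\xe0 vu'"

with tempfile.TemporaryDirectory() as d:
    iso_path = os.path.join(d, "messages.js.iso")
    utf8_path = os.path.join(d, "messages.js.utf8")

    with open(iso_path, "wb") as f:
        f.write(latin1_content)

    with open(iso_path, "rb") as f:
        text = f.read().decode("latin-1")  # every byte is valid Latin-1
    with open(utf8_path, "wb") as f:
        f.write(text.encode("utf-8"))

    # The file grows by one byte per character above 127 (here: é and à).
    growth = os.path.getsize(utf8_path) - os.path.getsize(iso_path)
    print(growth)  # 2
```

Decoding as Latin-1 can never fail, since every possible byte value maps to a character, which makes this direction of the conversion safe; going the other way requires the input to actually be valid UTF-8.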