There are several things that could be going wrong here:
- Character encoding: Others have commented on that, so I'll keep it brief: do you know exactly which character encoding each of your two text files uses? If so, is that source of information reliable? You can't work with text if you don't know how it's encoded, so you need to know for sure.
- Data format: Are you sure that all files use the same line endings? And that they don't contain any non-printable characters that throw your comparisons off? Arabic text in particular may well contain bidi markers that your text editor doesn't show, but that make otherwise identical strings compare as unequal in Perl.
- Normalization: If I'm informed correctly, the Arabic script makes heavy use of diacritic marks. That means many characters have multiple representations (pre-composed, or base character plus separate combining diacritics). In that case Unicode::Normalize can help you.
- Cultural misunderstanding: If you don't know the script and the language, things that look identical to you may actually differ in small details, and the words you want to remove may not actually be present in your input data.
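To illustrate the data-format point: here's a minimal sketch of a cleanup step that unifies line endings and strips invisible bidi formatting characters before comparing. The `clean_line` helper and the sample strings are mine, not from your code; it assumes the bytes are UTF-8.

```perl
use strict;
use warnings;
use utf8;
use Encode qw(decode);

# Hypothetical helper: bring a line into a comparable form by removing
# characters that are invisible in most editors.
sub clean_line {
    my ($line) = @_;
    $line =~ s/\r\n?/\n/g;        # unify CRLF and bare-CR line endings
    $line =~ s/^\x{FEFF}//;       # drop a leading BOM, if present
    # strip bidi formatting marks (LRM, RLM, ALM, embeddings, isolates)
    $line =~ s/[\x{200E}\x{200F}\x{061C}\x{202A}-\x{202E}\x{2066}-\x{2069}]//g;
    return $line;
}

# The same Arabic word, once with a trailing right-to-left mark and CRLF,
# once plain; they look identical but are different strings.
my $raw  = decode('UTF-8', "\xD8\xB3\xD9\x84\xD8\xA7\xD9\x85\xE2\x80\x8F\r\n");
my $bare = decode('UTF-8', "\xD8\xB3\xD9\x84\xD8\xA7\xD9\x85\x0A");

print $raw eq $bare                         ? "equal raw\n"   : "different raw\n";
print clean_line($raw) eq clean_line($bare) ? "equal cleaned\n" : "different cleaned\n";
```

Whether you want to strip those marks or preserve them depends on what the files are for; for matching and removing words, stripping them first is usually the safer choice.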
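And to illustrate the normalization point, a small sketch using Unicode::Normalize: Arabic alef with madda exists as a single pre-composed code point (U+0622) and as a base alef plus a combining maddah (U+0627 U+0653). They render identically but compare unequal until you normalize both sides to the same form.

```perl
use strict;
use warnings;
use utf8;
use Unicode::Normalize qw(NFC);

my $precomposed = "\x{0622}";          # ARABIC LETTER ALEF WITH MADDA ABOVE
my $decomposed  = "\x{0627}\x{0653}";  # ALEF + combining ARABIC MADDAH ABOVE

print $precomposed eq $decomposed           ? "equal raw\n"       : "different raw\n";
print NFC($precomposed) eq NFC($decomposed) ? "equal after NFC\n" : "different after NFC\n";
```

Pick one normalization form (NFC or NFD) and apply it to both your input data and your word list before comparing.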