As I understand it, the perl process holds onto heap memory once it has been allocated rather than returning it to the OS (I could be wrong), so yes, in your case it's going to keep the memory footprint of the largest use case. There are a couple of approaches that might help ameliorate this for you:
Can you modify your file parsing so it's streaming instead of slurping? Just because you need to process 90 MB doesn't necessarily mean you need to hold onto 90 MB of data.
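A minimal sketch of the streaming approach, assuming a tab-separated file and a hypothetical `sum_column` task (your parsing logic would go in the loop body); the point is that only one line is ever held in memory:

```perl
use strict;
use warnings;

# Hypothetical example: sum a numeric column from a large file.
# Reading line by line keeps one line in memory at a time,
# instead of slurping everything with my @lines = <$fh>.
sub sum_column {
    my ($path, $col) = @_;
    open my $fh, '<', $path or die "Can't open $path: $!";
    my $total = 0;
    while (my $line = <$fh>) {          # stream: one line per iteration
        chomp $line;
        my @fields = split /\t/, $line;
        $total += $fields[$col] // 0;
    }
    close $fh;
    return $total;
}
```

The same pattern applies to record-oriented formats: set `$/` to your record separator and process one record per loop iteration.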
Can you combine the above with a database? For example, by streaming your parsed records into an SQLite database, you should be able to keep perl's memory footprint small while still having random access to the data. You could swap that for an in-memory database if file access times become prohibitive, but I'm not sure whether that would just recreate the permanent memory footprint.
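A sketch of that idea, assuming the DBI and DBD::SQLite modules are installed; the table name and columns are made up for illustration, and the insert loop stands in for your real parser:

```perl
use strict;
use warnings;
use DBI;

# Stream parsed records into an on-disk SQLite database so the data
# lives on disk, not in perl's heap. Use dbname=:memory: for the
# in-memory variant mentioned above.
my $dbh = DBI->connect('dbi:SQLite:dbname=records.db', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
$dbh->do('CREATE TABLE IF NOT EXISTS records (id INTEGER, payload TEXT)');

my $sth = $dbh->prepare('INSERT INTO records (id, payload) VALUES (?, ?)');
# In a real parser this loop would read from your 90 MB file instead.
$sth->execute($_, "row $_") for 1 .. 3;
$dbh->commit;                     # batching inserts in one transaction is much faster

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM records');
$dbh->disconnect;
```

Wrapping the inserts in a single transaction (hence `AutoCommit => 0`) matters a lot for SQLite insert throughput.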
Finally, you could have a parent process that forks, and the children parse your files. That way, when the child is reaped, the memory is recovered.
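A sketch of the fork-and-reap pattern; `@files` and the commented-out `parse_file` are hypothetical placeholders for your file list and parser. Anything the parent needs back from a child has to travel through a pipe, a file, or the database from the previous point:

```perl
use strict;
use warnings;

# Parse each file in a forked child so the large allocations die with
# the child; the parent's footprint stays small throughout.
my @files = @ARGV;                 # hypothetical list of files to parse
for my $file (@files) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: do the memory-hungry parsing here.
        # parse_file($file);       # hypothetical parser
        exit 0;                    # child exits; OS reclaims its memory
    }
    waitpid $pid, 0;               # parent reaps the child before the next file
    die "child failed on $file" if $?;
}
```

Running the children sequentially as above caps peak memory at one parse; you could also fork several at once and trade memory for wall-clock time.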
See also http://stackoverflow.com/questions/9733146/tips-for-keeping-perl-memory-usage-low.