Oh, I probably should have mentioned that the files are later parsed with XML::LibXML, using a hybrid pull-parser / DOM-tree strategy.
More concretely, I implemented a pull parser with XML::LibXML::Reader, and then, for every node of interest (these nodes are dramatically smaller than the whole XML DOM tree), I load it into memory and extract the data I'm interested in with XML::LibXML::XPathContext.
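For anyone curious, the hybrid approach looks roughly like this. This is just a minimal sketch, not my actual code: the file name, the <record> element, and the ./name field are made-up placeholders for illustration.

    use strict;
    use warnings;
    use XML::LibXML::Reader;
    use XML::LibXML::XPathContext;

    # 'big_file.xml' is a hypothetical input file.
    my $reader = XML::LibXML::Reader->new( location => 'big_file.xml' )
        or die "Cannot open big_file.xml\n";

    # Stream to each <record> element (hypothetical name) without
    # ever building a DOM tree for the whole document.
    while ( $reader->nextElement('record') ) {

        # Deep-copy only this small subtree into a regular DOM node...
        my $node = $reader->copyCurrentNode(1);

        # ...and query it with XPath as usual.
        my $xpc  = XML::LibXML::XPathContext->new($node);
        my $name = $xpc->findvalue('./name');
        print "$name\n";
    }

Since each copied subtree is tiny compared to the full document, memory stays flat no matter how large the input file is, which is exactly the behavior I see in the parsing code.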
I apologize if omitting this turned out to be misleading.
But the fact is that the part of the code in charge of the parsing does work well, with memory usage in line with what I expect.
The part that doesn't work as expected is the concrete piece of code in the original post (which I isolated into this single script for testing purposes).
The only code omitted there is an array containing the paths to the downloaded files, which the function returns, plus a few more URLs in the urls array.