The original problem stated the integers were held in files. It would probably be worthwhile for me to alter the benchmark to pull in integers from a file instead. But then I'm testing IO, not the algorithm.
Not really. You'd at least be testing the real-world scenario. And what you'd most likely find is that, once you're using a hash, the IO so completely swamps everything else that further savings will be immaterial. If saving the 20 seconds that the hash costs is important, you could use IO::AIO to read in the data and insert it into the hash - if I read that module right, you should be able to insert and read simultaneously, which brings the total time down to just the reading speed (whether from local disk or over the network). I doubt it'll be important.
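To make the point concrete, here's the plain sequential baseline in core Perl: read the integers from a file and build the hash as you go. (The data and filenames are made up for illustration; the IO::AIO version would overlap the reads with the inserts rather than alternating them, but the shape of the loop is the same.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write some integers to a temporary file, one per line,
# to stand in for the real input files.
my ($out, $path) = tempfile(UNLINK => 1);
print $out "$_\n" for (3, 1, 4, 1, 5, 9, 2, 6);
close $out;

# Read them back and insert into a hash as we go -- this is
# the part whose IO cost dominates the hash-insert cost.
my %seen;
open my $in, '<', $path or die "open $path: $!";
while (my $line = <$in>) {
    chomp $line;
    $seen{$line}++;
}
close $in;

# %seen now maps each integer to its number of occurrences.
print "distinct: ", scalar keys %seen, "\n";   # distinct: 7
print "ones: $seen{1}\n";                      # ones: 2
```

Timing this loop against an in-memory version of the same inserts (e.g. with the core Benchmark module) would show how much of the wall-clock time is the read itself.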
Basically, this is the definition of premature optimisation. You're micro-optimising something that will have no effect on the overall speed because you're ignoring the elephant in the algorithm: disk IO.
That, and the payoff isn't that great even when you do make this optimisation. Dropping from the hash to the array is a huge savings for the cost involved. Dropping to XS is not as good, because the development and debugging time will likely completely dwarf the savings again :-) For the record, I expect using IO::AIO to be cheaper to implement than your XS version, to give a negligibly better savings, and still not to be worth it overall :-)