a) Is there a better way to do this?
b) Can the memory footprint be reduced any?
Two very different questions. To answer the second question first: an easy way to reduce the memory footprint is, for each query, to read the file line by line and report each interval that overlaps. I bet you are now saying "but that's too slow". With many problems there's a trade-off between memory usage and processing time: reduce the memory usage, and the processing time goes up. Just asking to "reduce the memory usage" without saying anything about processing time may not get you the answer you are looking for.
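As a sketch of that streaming approach (assuming a made-up file format of one tab-separated `start end` pair per line, and closed intervals; `overlapping` and the parameter names are hypothetical):

```python
def overlapping(lines, q_start, q_end):
    """Scan intervals one line at a time: O(1) extra memory,
    but O(n) time for every single query."""
    hits = []
    for line in lines:
        s, e = map(int, line.split())
        # Closed intervals [s, e] and [q_start, q_end] overlap iff
        # each one starts before the other ends.
        if s <= q_end and e >= q_start:
            hits.append((s, e))
    return hits
```

You would call it with an open file handle, e.g. `overlapping(open("intervals.txt"), 4, 6)`, so only one line is ever held in memory.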
As for your first question, it depends. What is "better" in your opinion? *My* idea of better is to reduce query time and invest in memory and preprocessing time: build a segment tree or interval tree once, then run every query against that. But that will increase your memory usage, so it's probably not better for you.
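To make the trade-off concrete, here is a minimal centered interval tree (a sketch, not a tuned implementation; the class and method names are my own). It spends O(n log n) build time and O(n) extra memory so that a point query touches only the intervals near the query instead of the whole file:

```python
class IntervalTree:
    """Centered interval tree over closed intervals (start, end)."""

    def __init__(self, intervals):
        if not intervals:
            self.center = None
            return
        # Pick the median start as this node's center point.
        starts = sorted(s for s, e in intervals)
        self.center = starts[len(starts) // 2]
        left = [iv for iv in intervals if iv[1] < self.center]
        right = [iv for iv in intervals if iv[0] > self.center]
        mid = [iv for iv in intervals if iv[0] <= self.center <= iv[1]]
        # Intervals crossing the center, indexed both ways for fast scans.
        self.by_start = sorted(mid)
        self.by_end = sorted(mid, key=lambda iv: iv[1], reverse=True)
        self.left = IntervalTree(left) if left else None
        self.right = IntervalTree(right) if right else None

    def query_point(self, p):
        """Return all stored intervals containing point p."""
        if self.center is None:
            return []
        result = []
        if p < self.center:
            # Crossing intervals all end at/after center, so they
            # contain p exactly when they start at or before p.
            for iv in self.by_start:
                if iv[0] > p:
                    break
                result.append(iv)
            if self.left:
                result.extend(self.left.query_point(p))
        elif p > self.center:
            # Symmetric: they contain p when they end at or after p.
            for iv in self.by_end:
                if iv[1] < p:
                    break
                result.append(iv)
            if self.right:
                result.extend(self.right.query_point(p))
        else:
            result.extend(self.by_start)
        return result
```

For example, `IntervalTree([(1, 5), (3, 7), (8, 12)]).query_point(4)` finds the two intervals covering 4 without scanning intervals far to the right. Extending `query_point` to range queries is straightforward but omitted to keep the sketch short.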