in reply to Perl beginner's issue with hash

Reading a file multiple times is usually a "code smell" (a sign of bad coding practice). Very often, as in your case, it happens because you want to compare all the items in a file against another group of items. That generally leaves two options:

  1. Load the first group into some data structure (most often a hash or array), then read the second group (usually the lines of a file) one item at a time and check each against the preloaded values
  2. The same as option 1, but with the two groups swapped
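The first option can be sketched as a short helper. This is a minimal illustration, not your actual code: `matching_lines` is a hypothetical sub, and the two array refs stand in for the lines of your two files.

```perl
use strict;
use warnings;

# Option 1: preload the smaller group into a hash, then make a
# single pass over the larger group checking each item against it.
sub matching_lines {
    my ($small, $large) = @_;               # array refs of lines
    my %wanted = map { $_ => 1 } @$small;   # hash lookup is O(1) per item
    return grep { $wanted{$_} } @$large;    # one pass over the big group
}

my @hits = matching_lines([qw(apple cherry)], [qw(apple banana cherry date)]);
print "@hits\n";    # prints "apple cherry"
```

In real code the two array refs would come from reading each file once; the point is that neither file is ever reopened.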

The expected sizes of the two groups usually determine which option to choose: store the smaller of the two groups in memory. Commonly the data structure will be an array or a hash, although in your case a regular expression, as Haukex suggested, is probably the best option.
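The regex approach can look like this sketch: build one alternation from the smaller group (quoting any metacharacters) and match every line of the larger group against that single compiled pattern. The word list and sample lines here are made up for illustration.

```perl
use strict;
use warnings;

# Build one alternation regex from the smaller group of items.
my @words = qw(cat dog);                        # hypothetical search terms
my $pat   = join '|', map { quotemeta } @words; # quotemeta guards metacharacters
my $re    = qr/\b(?:$pat)\b/;                   # compile the pattern once

# Scan the larger group (here, sample lines) in a single pass.
my @lines = ('the cat sat', 'no match here', 'dog days');
my @hits  = grep { /$re/ } @lines;
print "$_\n" for @hits;    # prints the first and third lines
```

Compiling the pattern once with `qr//` matters: rebuilding the regex inside the loop would throw away most of the speed advantage.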

The reason for avoiding rereading a file is that doing so is slow. Reading data from a file is likely to be thousands of times slower than "reading" the same data from memory. With modern computers reading even large files into memory is practical.

Optimising for the fewest keystrokes only makes sense if you're transmitting your code to Pluto or beyond.