Speaking to this requirement more generally: many fundamental data-processing procedures (dating all the way back to Herman Hollerith’s punched cards, and to spinning magnetic tapes after them) rely on the notion of processing sorted input streams.
If you know that a particular input stream is sorted on some key, then you necessarily know that all records sharing any particular key value must be consecutive. Therefore, you can simply remember the key value that you saw in the single preceding record and compare it to the record that you have now. If the values of the key field(s) are not the same, then you know that you have just encountered the last occurrence of the previous key, and the first occurrence of the new one. (“End-of-file” is simply a special case of this.)
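As a minimal sketch of this “control-break” pattern, here is one way it might look in Python. The function name, the sample records, and the count-per-key aggregation are all illustrative assumptions, not anything prescribed by the text; the essential idea is simply remembering the preceding record’s key and treating end-of-stream as the final break.

```python
def control_break(records, key):
    """One-pass aggregation over a stream already sorted by `key`.

    Yields (key_value, count) each time the key changes; the end of
    the stream ("End-of-File") is handled as the final break.
    """
    prev = None   # key value of the preceding record
    count = 0
    for rec in records:
        k = key(rec)
        if prev is not None and k != prev:
            # The last occurrence of the previous key has just passed.
            yield (prev, count)
            count = 0
        prev = k
        count += 1
    if prev is not None:
        # End-of-stream is the special case: flush the final group.
        yield (prev, count)

# Hypothetical usage with records sorted on their first field:
rows = [("A", 1), ("A", 2), ("B", 3)]
print(list(control_break(rows, key=lambda r: r[0])))  # [('A', 2), ('B', 1)]
```

Because the input is sorted, one pass and one remembered value suffice; no hash table of all keys ever needs to be held in memory.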
Even today, many decades after the days of punched cards and magnetic tapes, the efficiencies of sorted files remain relevant. (After all, Dr. Knuth entitled an entire volume of The Art of Computer Programming “Sorting and Searching.”) The one-time cost of sorting an input file, which is likely to be much smaller than you might suppose, can produce dramatic improvements (as in, “orders of magnitude faster”) in downstream processing stages that are able to exploit, and require, that ordering.