Fetching hashes is (much) slower than fetching arrays, but even faster than either is using bind_columns (), so that the variables that will hold the data are not re-allocated on every fetch. FWIW, the DBI manual includes an example of exactly that.
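A minimal sketch of that bind_columns () approach, assuming DBD::SQLite is available (the table and column names are made up for illustration):

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite database just for the demo; use your own DSN
my $dbh = DBI->connect ("dbi:SQLite:dbname=:memory:", undef, undef,
    { RaiseError => 1 });

$dbh->do ("create table tbl (id integer, name text)");
$dbh->do ("insert into tbl values (1, 'foo'), (2, 'bar')");

my $sth = $dbh->prepare ("select id, name from tbl");
$sth->execute;

# Bind Perl variables once; every fetch refills these same scalars
# instead of allocating a fresh array or hash per row
my ($id, $name);
$sth->bind_columns (\$id, \$name);

while ($sth->fetch) {
    printf "%d: %s\n", $id, $name;
}
```

Because fetch () only refills the bound scalars, the per-row overhead is about as low as DBI can make it.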
Then, when comparing, ask yourself how likely it is that records (in the sense of a set of fields or columns) will not match, and where a mismatch is most likely to occur. If, e.g., the mismatch is almost always somewhere in the first three of your 60 columns, have a look at List::Util's first and stop at the first mismatch, avoiding the comparison of all the remaining fields. If the mismatch is expected in the last fields, you would have to bake your own optimization, e.g. by creating an aliased list with all the fields in reverse order (so you can use first again); that way you would not have to call reverse on every record or fall back to explicit loops.
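That early-exit comparison could look like the sketch below (the data is invented; List::Util's first returns the first list element for which the block is true, and stops scanning right there):

```perl
use strict;
use warnings;
use List::Util qw( first );

my @old = qw( 1 foo 2017-01-01 active );
my @new = qw( 1 foo 2017-01-01 closed );

# Index of the first differing column, or undef if the records match.
# first stops at the first hit, so later fields are never compared.
my $diff = first { $old[$_] ne $new[$_] } 0 .. $#old;

if (defined $diff) {    # defined, not truth: column 0 is a valid hit
    print "records differ at column $diff\n";
}

# If mismatches cluster at the end, one simple variant is to scan the
# index range in reverse instead of reversing the field data itself:
my $rdiff = first { $old[$_] ne $new[$_] } reverse 0 .. $#old;
```

Reversing the index range is a cheap stand-in for the aliased reversed list mentioned above: the index list is tiny and constant per record, so nothing in the row data is copied or reversed.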
Enjoy, Have FUN! H.Merijn