http://www.perlmonks.org?node_id=1019462


in reply to Comparing DBI records

Fetching hashes is (much) slower than fetching arrays, but even faster than that is using bind_columns () so that the variables that will hold the data are not re-allocated on every fetch. FWIW, the DBI manual includes an example of that.
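A minimal sketch of that bind_columns () pattern, assuming DBD::SQLite and a throwaway table `t` (both are illustrations, not part of the original post):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect ("dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1, AutoCommit => 1 });

# Hypothetical table, just to have something to fetch
$dbh->do ("CREATE TABLE t (id INTEGER, name TEXT)");
$dbh->do ("INSERT INTO t VALUES (1, 'foo'), (2, 'bar')");

my $sth = $dbh->prepare ("SELECT id, name FROM t");
$sth->execute;

# Bind once: DBI writes into these same scalars on every fetch,
# so no fresh array or hash is allocated per row
my ($id, $name);
$sth->bind_columns (\$id, \$name);

my @rows;
while ($sth->fetch) {
    push @rows, "$id: $name";
}
print "$_\n" for @rows;
```

The same binding works for 60 columns: pass 60 scalar refs (or map over a list of variables) and the fetch loop stays allocation-free.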

Then, when comparing, ask yourself how likely it is that records (in the sense of a set of fields or columns) will not match, and where a mismatch is most likely to occur. If, e.g., the mismatch is almost always somewhere in the first three of your 60 columns, have a look at List::Util's first and stop at the first mismatch, avoiding the comparison of all remaining fields. If the mismatch is expected in the last fields, you will have to bake your own optimization, e.g. by creating an aliased list with the fields in reverse order (so you can use first again); that way you do not have to call reverse on every record or fall back to explicit loops.
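A sketch of the short-circuit comparison with List::Util's first, using two hypothetical records fetched as array refs (the data is made up for illustration):

```perl
use strict;
use warnings;
use List::Util qw( first );

# Two hypothetical records; imagine 60 columns instead of 4
my $old = [ "a", "b", "c", "d" ];
my $new = [ "a", "b", "x", "d" ];

# Index of the first differing column, or undef if the records match.
# first () short-circuits, so columns after the mismatch are never compared.
my $diff = first { ($old->[$_] // "") ne ($new->[$_] // "") } 0 .. $#$old;

print defined $diff
    ? "records differ at column $diff\n"
    : "records match\n";
```

If mismatches cluster in the last columns, scan the index list from the end instead: `first { ... } reverse 0 .. $#$old`. Reversing the small index range once per record is cheap and avoids calling reverse on the data itself.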


Enjoy, Have FUN! H.Merijn

Replies are listed 'Best First'.
Re^2: Comparing DBI records
by parser (Acolyte) on Feb 19, 2013 at 16:54 UTC
    Great advice, Tux. Thank you. Unfortunately, any field in the entire record could change with the exception of the primary index. Worse, many of the fields could change.

    I AM intrigued by the List::Util module and will have a go at using it as a separate exercise.