In case anyone else has experienced this problem as well,
here's an update on what I've learned since posting the
question.
I spoke with H. Merijn Brand (author of DBD::Unify) at
the July Amsterdam.pm meeting to see if he had any
insights. According to him (and if this is wrong, I'm
probably remembering it incorrectly), large-object
handling probably isn't implemented in the Postgres DBD
because DBI currently lacks the specifications for doing
so, which is the reason he did not implement large objects
in his own driver. According to the DBI FAQ, section 5.2,
the answer to 'How do I handle BLOB data with DBI?' is:
If the BLOB can fit into your memory then setting the LongReadLen attribute
to a large enough value should be sufficient.
If not, ... To be written.
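For what it's worth, on a driver that does honor LongReadLen, the FAQ's first suggestion looks roughly like this. This is just a sketch: the DSN, the 'docs' table, and the 'content' column are made up for illustration.

```perl
use DBI;

# Sketch only: assumes a driver that honors LongReadLen, and a
# hypothetical 'docs' table with a BLOB column named 'content'.
my $dbh = DBI->connect('dbi:SomeDriver:dbname=mydb', $user, $pass,
                       { RaiseError => 1 });

$dbh->{LongReadLen} = 10 * 1024 * 1024;  # fetch BLOBs up to 10 MB
$dbh->{LongTruncOk} = 0;                 # die instead of silently truncating

my ($content) = $dbh->selectrow_array(
    'SELECT content FROM docs WHERE id = ?', undef, $doc_id);
```

Setting LongTruncOk to 0 makes the fetch fail loudly if a value exceeds LongReadLen, which is usually what you want while you're still guessing at a safe size.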
Postgres of course doesn't implement LongReadLen because
of how it handles large objects. According to the DBI
book ('Programming the Perl DBI' by Alligator Descartes
and Tim Bunce, O'Reilly, ISBN 1-56592-699-4), the only
method implemented by DBD::Pg is blob_read(), which is
'undocumented'
(DBD::Pg has the following to say about it:
'Supported by this driver as proposed by DBI.
Implemented by DBI but not documented, so this method
might change.').
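For the record, a chunked read with blob_read() would look something like the following. Since DBI itself warns the method might change, treat this as a sketch of the interface rather than gospel; the table and column names are again placeholders.

```perl
# Sketch: read column 0 of the current row in 4 KB chunks.
my $sth = $dbh->prepare('SELECT content FROM docs WHERE id = ?');
$sth->execute($doc_id);
$sth->fetchrow_arrayref;

my ($offset, $blob) = (0, '');
while (defined(my $chunk = $sth->blob_read(0, $offset, 4096))) {
    last unless length $chunk;   # empty chunk means end of data
    $blob   .= $chunk;
    $offset += length $chunk;
}
```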
Based on all this, I decided to use DBD::Pg's
driver-specific functions (lo_creat, lo_open, lo_import, etc.)
until something else is available.
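To sketch what that looks like in practice, here's the shape of it via DBI's func() interface, which is how DBD::Pg exposes these calls. Two caveats: the large-object calls must run inside a transaction (hence AutoCommit off), and the data and filename below are placeholders.

```perl
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

my $mode = $dbh->{pg_INV_READ} | $dbh->{pg_INV_WRITE};

# Create, open, and write a new large object.
my $oid = $dbh->func($mode, 'lo_creat');
my $fd  = $dbh->func($oid, $mode, 'lo_open');
$dbh->func($fd, $data, length($data), 'lo_write');
$dbh->func($fd, 'lo_close');

# Or import an existing file in one call.
my $oid2 = $dbh->func('/tmp/some_file', 'lo_import');

$dbh->commit;   # large-object work must be committed like anything else
```

The OID you get back is what you store in your table; the actual bytes live in the pg_largeobject system catalog.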