The reason I prefer the DBI/DBD route is that you use the same interface for *any* database. Here at $work, I work with Oracle, MSSQL, SQLite and occasionally PostgreSQL. When I have to interact with a particular database, I don't have to ask myself questions like:
OK, how do I read a result set with *this* database?
Can I use placeholders in my query? If so, how do I do that?
What data structure can I get my results in?
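For reference, the answers to those questions are the same no matter which driver is underneath. A minimal sketch (using DBD::SQLite here purely as an example; the table and values are made up):

```perl
use strict;
use warnings;
use DBI;

# Connect; RaiseError saves us from checking return values by hand.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=example.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

$dbh->do('CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)');

# Placeholders work the same way with every DBD driver.
my $sth = $dbh->prepare('INSERT INTO users (id, name) VALUES (?, ?)');
$sth->execute( 1, 'alice' );
$sth->execute( 2, 'bob' );

# Reading a result set: fetchrow_hashref returns one hashref per row.
$sth = $dbh->prepare('SELECT id, name FROM users WHERE id = ?');
$sth->execute(1);
while ( my $row = $sth->fetchrow_hashref ) {
    print "$row->{id}: $row->{name}\n";
}

$dbh->disconnect;
```

Swap the DSN and this same script runs against Oracle, MSSQL, or PostgreSQL with the appropriate driver installed.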
Since DBI provides a standardized interface, I can be immediately productive when I switch back to a database I use rarely, without having to reacquaint myself with a module I haven't used in a year.
Sure, there are occasional differences between databases, but DBI/DBD lets me ignore most of them. Once in a while I need a database-specific feature and have to read the DBD::Oracle docs or some such. But better that than having to read documentation on all the everyday operations: selecting, inserting, updating and deleting.
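To illustrate the point: in practice the only part of a script that has to change per database is the connect line. The host, SID, and database names below are placeholders:

```perl
use strict;
use warnings;
use DBI;

my ( $user, $pass ) = ( 'someuser', 'secret' );    # placeholder credentials

# Oracle:
my $dbh = DBI->connect( 'dbi:Oracle:host=dbhost;sid=ORCL', $user, $pass );

# MSSQL via ODBC:
# my $dbh = DBI->connect( 'dbi:ODBC:DSN=mssql_dsn', $user, $pass );

# PostgreSQL:
# my $dbh = DBI->connect( 'dbi:Pg:dbname=mydb;host=dbhost', $user, $pass );

# SQLite (no credentials needed):
# my $dbh = DBI->connect( 'dbi:SQLite:dbname=mydb.db', '', '' );
```

Everything after the connect call (prepare, execute, fetch) stays the same.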
Same as roboticus, really - the DBI syntax is a little simpler, and the API is (mostly) standardized, so your scripts can be easily maintained by someone else who doesn't necessarily understand the CTlib API.
And I'm more likely to fix things in DBD::Sybase these days...
Thanks Michael. My team is working on migrating our scripts from DBlib to either CTlib or DBD::Sybase. However, we are still discussing the pros and cons of CTlib vs. DBD::Sybase. One thing we noticed is that bcp in CTlib is much faster than the "Experimental Bulk Load" utility in DBD::Sybase. Can you please clarify whether the "Experimental Bulk Load" utility in DBD::Sybase performs ordinary inserts under the hood, and is therefore slower?