Your skill will accomplish
what the force of many cannot
princepawn, I'm still in no position to argue who is correct. I haven't used DBIx::Recordset, so this reply will primarily be a critique of your article. I think your position on the DBI is a bit hasty: it points out detriments that may simply be ghosts in your machine.
I truly like this introduction. I really believed I was about to see something that would change my everyday work. The hook for me is that you are not chastising the DBI's "shortcomings", but chastising the users of the DBI for making their own lives difficult.
Problems Using DBI at Application-Level
Intolerance to Table and Data Mutation
My primary beef with this section is your suggestion that calls to the DBI with explicitly named fields will break upon table mutation. This is simply incorrect. Recent versions of the popular DBMSs (Oracle, MS SQL, Sybase, etc.) will handle the sample code, and any other code that explicitly names its parameters, just fine.

Is table mutation a gotcha? You bet it is. But to my knowledge it only bites when shortcuts involving an explicit or implicit * are used, as in SELECT * or INSERT INTO table VALUES (value1, value2, value3).

In short, this is the first claim in your article that makes me pause. As a programmer, I know it is tempting to use the shortcuts when developing with the DBI. That is false laziness. By making my SQL as explicit as possible up front, I am practicing true laziness, because I will not have to modify that code when tables mutate.
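To make the distinction concrete, here is a small sketch (the widgets table, its columns, and the values are my own made-up examples, not anything from the article) showing an explicit column list versus the fragile implicit form. The explicit statement keeps working after someone runs ALTER TABLE to add a column; the implicit one quietly depends on column count and order:

```perl
use strict;
use warnings;

# Hypothetical table and columns, for illustration only.
my $table = 'widgets';
my @cols  = qw(name color price);

# Fragile shortcut: no column list, so the statement silently
# depends on the table's current column count and order.
my $implicit = "INSERT INTO $table VALUES (?, ?, ?)";

# Robust form: every column named, so a later
# ALTER TABLE widgets ADD created_on ... does not break it.
my $explicit = sprintf(
    'INSERT INTO %s (%s) VALUES (%s)',
    $table,
    join(', ', @cols),
    join(', ', ('?') x @cols),
);

print "$explicit\n";
```

The same principle applies to SELECT: naming the columns pins both the set and the order of what comes back.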
Error-Prone and Tedious Query Construction
My primary DBI use is with MS SQL, so my primary DBD is DBD::Sybase. It doesn't support placeholders, so I will concentrate on the literal notation. First, though, I would ask what difference placeholder versus literal notation makes when the SQL is written to be explicit. I personally can't see the advantage of one over the other, though that is partly because the placeholder avenue is not available to me. To continue: an explicitly written SQL statement with descriptive variable names will indeed suffer as statements grow longer, unless the programmer employs tricks to help his eyes follow the code. But the problem phrase in this section is one I have dealt with: "Literal notation will require more lines of code..." At worst, a good programmer will only require 3 more lines:
Embedding the variables to be interpolated in the SQL statement takes as many lines as preparing the array being executed. I wrote a simple sub to control what gets quoted: I pass it two strings, and it loops through each parameter and decides whether to quote based on them. This took me three lines. Yes, this is the type of solution you are complaining about, but it will be as efficient as the code DBIx::Recordset uses, and now in my scripts I have one line of prep time for quoted strings. As far as I can tell, my prep time is equivalent to yours, because you never showed the prep time involved in building the hash passed to Insert in the Recordset. You may have a ghost in your machine that does this for you, but my code still wants me to prepare my hashes explicitly before I try to use them.
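A reconstruction of that helper might look like the following. To be clear, this is my guess at the shape of such a sub, not the original code: the name quote_vals, the 's'/'n' type codes, and the comma-delimited value string are all my own inventions.

```perl
use strict;
use warnings;

# quote_vals: sketch of a two-string quoting helper.
# $types is a string of type codes ('s' = string, 'n' = number);
# $vals is the matching values joined on commas. Strings are
# single-quoted with embedded quotes doubled; numbers pass through.
sub quote_vals {
    my ($types, $vals) = @_;
    my @v = split /,/, $vals;
    return join ', ', map {
        substr($types, $_, 1) eq 's'
            ? do { (my $q = $v[$_]) =~ s/'/''/g; "'$q'" }
            : $v[$_];
    } 0 .. $#v;
}

# One line of prep per statement:
my $values = quote_vals('sns', "O'Brien,42,red");
my $sql    = "INSERT INTO widgets (name, qty, color) VALUES ($values)";
print "$sql\n";
```

Note the doubled quote in O'Brien's name; a real script should still prefer $dbh->quote where the driver provides it, since hand-rolled quoting is exactly where injection bugs live.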
Manual and Complex Mapping of Database Data to Perl Data Structures
I think this is where chromatic chimed in, or at least where his comments got to you the most. I may be wrong, but since I don't have the full text of his reply, I'll leave it at that and go on with my own critique. Why are there multiple return methods in the DBI? Why is fetchrow_hashref so maligned? Who is John Galt?
Seriously, there does seem to be an inordinate number of methods devoted simply to retrieving the data from an executed statement. But consider for a moment that this was probably not done in a fit of hubris on Tim's part. The key here is flexibility. When I write
SELECT * FROM table
the only safe way of retrieving my data without falling afoul of your beloved table mutation problem is fetchrow_hashref. But let's consider a more explicit SQL statement, and a reason why I might want the other methods.

Single-row methods are handy. They say you are only interested in the first row, generally because, by design, you only expect one. Is there more than a single row there? Who cares? That gets into a whole other meditation. Nonetheless, my point is that the multiplicity of return methods is designed to provide the right chunk at the right time. I'm confused how the one line of DBIx::Recordset is able to return multiple rows through a single tied hash keyed on column names. With my explicit SQL and a quick while ($t = $sth->fetchrow_arrayref) I'm guaranteed an order and can load the data into my own data structure; the choice of return method is driven largely by the data structure I wish to use (which may not be a hash). The multiple methods are about freedom. Why is a hash always the best data structure?
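To make that freedom concrete, here is a sketch of loading ordered rows into a structure of my own choosing. Since I can't assume a live database handle here, a tiny stand-in closure plays the role of $sth->fetchrow_arrayref; the column names and data are hypothetical.

```perl
use strict;
use warnings;

# Stand-in for an executed $sth: returns one arrayref per row in
# SELECT column order, then undef, the way fetchrow_arrayref does.
# Columns (monk, posts) and data are made up for illustration.
my @rows = ( [ 'merlyn', 7 ], [ 'ovid', 3 ] );
my $fetchrow_arrayref = sub { shift @rows };

# Explicit SQL guarantees column order, so arrayrefs can feed
# whatever structure I actually want -- here, a hash of arrays
# keyed on the first column. fetchrow_hashref's one-hash-per-row
# shape would not give me this directly.
my %posts_by_monk;
while ( my $row = $fetchrow_arrayref->() ) {
    my ($monk, $posts) = @$row;
    push @{ $posts_by_monk{$monk} }, $posts;
}

print "$_ => @{ $posts_by_monk{$_} }\n" for sort keys %posts_by_monk;
```

Swap the target structure (array of arrays, hash of hashes, whatever the application needs) and the loop body changes, not the API.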
Stay tuned for more comments. I think I've pointed out simpler problems at the heart of the article than those alluded to in your reprint of chromatic's specific ponderings. Primarily: you begin the article by stating that users aren't making informed choices about their API, but then, instead of purely showing the benefits of Recordset, the article degrades into badly interpreted "shortcomings" of the SQL statements, meant to entice the user into writing applications through an even further abstracted API. I agree that a higher-level API can be useful, and in fact quicker to use, but the drawback is lost freedom. It may be entirely appropriate in some applications, perhaps even in most of the applications you are targeting as offenders, but that freedom will be necessary somewhere, so total abstraction is not a solution I would willingly accept without returned benefits beyond quicker development time.
ALL HAIL BRAK!!!
In reply to Re: DBI vs DBIx::Recordset (or Ghosts in the Machine?)