Re^3: Fetchrow question

by Anonymous Monk
on Mar 21, 2012 at 13:43 UTC


in reply to Re^2: Fetchrow question
in thread Fetchrow question

While I cannot comment on either ODBC or ADODB, I heartily recommend the Advanced Perl DBI talk, which discusses squeezing the most performance out of the Perl side, and a lot more.

(Hint: fetchrow_array() copies the values practically for naught. You are better off using fetchrow_arrayref(), especially with the magic added by bind_columns() -- see slides 22-23)
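
A minimal sketch of that pattern (the DSN, table, and column names here are made up -- adjust to your own setup):

use strict;
use warnings;
use DBI;

# Hypothetical connection -- replace DSN and credentials with your own.
my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'password',
    { RaiseError => 1 });

my $sth = $dbh->prepare('SELECT id, name FROM my_table');
$sth->execute;

# Bind Perl variables to the result columns; each fetch() then refreshes
# them in place, avoiding the per-row copying done by fetchrow_array().
my ($id, $name);
$sth->bind_columns(\$id, \$name);

while ($sth->fetch) {
    print "$id: $name\n";
}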

Replies are listed 'Best First'.
Re^4: Fetchrow question
by alainb (Initiate) on Mar 21, 2012 at 15:36 UTC
    Hi, yes, the 100,000 reads were only for the test, so I could evaluate the difference in time from one method to another.

    The real thing doesn't read that many records, but it's so crazy that we have to do a lot of work.

    First, there is a keyid that is kept in a table.
    There is also a table for the header and another table for the details.

    To add a header I have to read the keyid table, get the keyid, then update the table with keyid + 1.

    Then I can add the header with the new keyid.

    To add a detail I have to do the same:
    read the keyid table, get the keyid, then update the table with keyid + 1,
    then add the detail with the new keyid.

    I know this probably looks weird, but it's a package that we cannot modify, so we have to work with the database as they defined it.

    So now, to gain performance, since I know how many details I have to add, instead of getting the keyid and updating it by 1 each time, I update it once by the number of detail lines + 1 (for the header).

    I.e.: if I have 10 detail lines + 1 header,
    I read the keyid and add 12 to it, then update the table.

    I add the header with keyid-11
    I add detail line 1 with keyid-10
    ...
    I add detail line 10 with keyid
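
    (A rough sketch of that block-reservation idea in DBI -- the key_table/header/detail names and columns below are made up, not the real package's schema:)

    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema: key_table(lastkey), header(keyid), detail(keyid, data)
    my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'password',
        { RaiseError => 1, AutoCommit => 0 });

    my @details = ('flort', 'zap', 'kazoo');   # example detail payloads
    my $rows    = @details + 1;                # detail lines plus one header

    # Reserve a whole block of ids at once instead of one update per row.
    my ($lastkey) = $dbh->selectrow_array('SELECT lastkey FROM key_table');
    $dbh->do('UPDATE key_table SET lastkey = ?', undef, $lastkey + $rows);

    # First reserved id goes to the header, the rest to the detail lines.
    my $keyid = $lastkey + 1;
    $dbh->do('INSERT INTO header (keyid) VALUES (?)', undef, $keyid);
    for my $d (@details) {
        $keyid++;
        $dbh->do('INSERT INTO detail (keyid, data) VALUES (?, ?)', undef, $keyid, $d);
    }
    $dbh->commit;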

      alainb:

      It looks like you should be able to reduce the number of database calls. The method will likely differ based on the database, but you should be able to reduce the number of operations for each set, possibly to a single transaction. Examples:

      MySQL

      MySQL allows you to insert multiple records with a single statement, like so:

      insert into FooBar (ID, col1, col2, col3)
      values ((select max(ID)+1 from FooBar), 'a', 1, 'flort'),
             ((select max(ID)+2 from FooBar), 'b', 2, 'zap'),
             ((select max(ID)+3 from FooBar), 'c', 3, 'kazoo')

      This would let you use a single communication for each table type.

      MS SQL Server / Sybase / Oracle / etc.

      insert into FooBar (ID, col1, col2, col3)
      select ID, col1, col2, col3 from (
          select (select max(ID)+1 from FooBar) ID, 'a' col1, 1 col2, 'flort' col3
          union
          select (select max(ID)+2 from FooBar), 'b', 2, 'zap'
          union
          select (select max(ID)+3 from FooBar), 'c', 3, 'kazoo'
      ) t

      Oracle

      Some database drivers, such as Oracle, let you send a block of SQL to execute, so you can build blocks and do all your inserts with one swell foop:

      declare
          v_ID number;
      begin
          -- First table
          select max(ID) into v_ID from FooBar;
          insert into FooBar (ID, col1, col2, col3) values (v_ID+1, 'a', 1, 'flort');
          insert into FooBar (ID, col1, col2, col3) values (v_ID+2, 'b', 2, 'zap');
          insert into FooBar (ID, col1, col2, col3) values (v_ID+3, 'c', 3, 'kazoo');

          -- Second table
          select max(ID) into v_ID from BarBaz;
          insert into BarBaz (ID, col1) values (v_ID+1, 'etcetera');
      end;
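
      From Perl, such a block can go out as a single statement; a minimal sketch, assuming a DBD::Oracle connection (DSN and credentials are hypothetical):

      use strict;
      use warnings;
      use DBI;

      # Hypothetical Oracle connection -- adjust DSN and credentials.
      my $dbh = DBI->connect('dbi:Oracle:MYDB', 'user', 'password',
          { RaiseError => 1, AutoCommit => 0 });

      # The whole PL/SQL block travels to the server in one round trip.
      $dbh->do(q{
          declare
              v_ID number;
          begin
              select max(ID) into v_ID from FooBar;
              insert into FooBar (ID, col1, col2, col3) values (v_ID+1, 'a', 1, 'flort');
              -- ... remaining inserts as above ...
          end;
      });
      $dbh->commit;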

      Using a work table

      Another option that I frequently find useful is to create some temporary work tables and bulk load the data into them. The bulk loader should be about the fastest way to get data into the database. Then create a stored procedure that integrates the data in the work tables into the real tables. For adding large sets of data to a database, this is pretty hard to beat.
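
      One concrete shape this could take from Perl, assuming a MySQL backend (the work table foobar_work, the file path, and the stored procedure integrate_foobar_work are all hypothetical):

      use strict;
      use warnings;
      use DBI;

      # Hypothetical MySQL connection; mysql_local_infile must be enabled
      # for LOAD DATA LOCAL INFILE to work.
      my $dbh = DBI->connect('dbi:mysql:database=mydb;mysql_local_infile=1',
          'user', 'password', { RaiseError => 1 });

      my @rows = ( ['a', 1, 'flort'], ['b', 2, 'zap'], ['c', 3, 'kazoo'] );

      # 1. Write the rows to a flat file for the bulk loader.
      open my $fh, '>', '/tmp/foobar_work.csv' or die "open: $!";
      print {$fh} join(',', @$_), "\n" for @rows;
      close $fh;

      # 2. Bulk load into the work table (a staging copy of FooBar).
      $dbh->do(q{
          LOAD DATA LOCAL INFILE '/tmp/foobar_work.csv'
          INTO TABLE foobar_work
          FIELDS TERMINATED BY ','
          (col1, col2, col3)
      });

      # 3. Let a stored procedure move the staged rows into the real table,
      #    assigning IDs as it goes.
      $dbh->do('CALL integrate_foobar_work()');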

      ...roboticus

      When your only tool is a hammer, all problems look like your thumb.

        In addition to the excellent advice from roboticus, both DBD::Oracle and now DBD::ODBC (from 1.34) support DBI's execute_for_fetch. You might be wondering why this helps (given the name) with multiple inserts, but it is because DBI's execute_array uses execute_for_fetch. With execute_array you can prepare an insert and provide as many rows of data for the parameters as you like, and the DBD will send them in batches (thus vastly reducing network traffic). It is usually a lot faster.
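
        A minimal sketch of execute_array for an insert (the table, columns, and DSN are hypothetical; the batching only kicks in when the driver, e.g. DBD::Oracle or DBD::ODBC 1.34+, implements a native execute_for_fetch):

        use strict;
        use warnings;
        use DBI;

        # Hypothetical ODBC connection -- adjust DSN and credentials.
        my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'password',
            { RaiseError => 1, AutoCommit => 0 });

        my $sth = $dbh->prepare('INSERT INTO detail (keyid, data) VALUES (?, ?)');

        # One array per placeholder; DBI pairs them up row by row.
        my @keyids = (101, 102, 103);
        my @data   = ('flort', 'zap', 'kazoo');

        my @tuple_status;
        my $inserted = $sth->execute_array(
            { ArrayTupleStatus => \@tuple_status },
            \@keyids,
            \@data,
        );
        die "some rows failed to insert" unless defined $inserted;

        $dbh->commit;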
