Re: How to optimized mysql insert into table

by Mjpaddy (Acolyte)
on Jan 17, 2017 at 15:01 UTC


in reply to How to optimized mysql insert into table

I edited my code; I forgot to check it before posting. This scenario is just one part of my script.

I would like suggestions on, for example, how multithreading could be applied in this case to reduce execution time. That is my main goal.

I really appreciate any help.


Re^2: How to optimized mysql insert into table
by Corion (Patriarch) on Jan 17, 2017 at 15:06 UTC

    Where in your program do you loop over your input file?

    You are talking about 100,000 inserts but your program is only ever doing one insert.

    Maybe you can show us a real program that does more than one insert.

    My suggestion for doing more than one insert is to use a loop, to prepare the statement handles outside of the loop, and to call ->commit every 1000 inserts or so; see the sketch below.
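    A minimal sketch of that pattern, assuming one record per line in the input file and the table layout from the original post (the DSN, credentials, file path, and field offsets are placeholders, not Corion's code):

        use strict;
        use warnings;
        use DBI;

        # Placeholders: adjust DSN, credentials and file path to taste.
        my $dbh = DBI->connect('DBI:mysql:dbname=basedb;host=127.0.0.1',
                               'root', 'root',
                               { RaiseError => 1, AutoCommit => 0 });

        # Prepare once, outside the loop.
        my $sth = $dbh->prepare(
            'INSERT INTO temp (version, jobnumber) VALUES (?,?)');

        open my $fh, '<', 'C:/temp/dump.txt' or die "open failed: $!";
        my $count = 0;
        while (my $line = <$fh>) {
            my $version   = substr($line, 0, 3);
            my $jobnumber = substr($line, 4, 5);
            $sth->execute($version, $jobnumber);
            $dbh->commit if ++$count % 1000 == 0;   # batch the commits
        }
        $dbh->commit;   # commit the final partial batch
        $dbh->disconnect;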

    Also, consider using the native data loading functionality of MySQL instead of hand-feeding the data to your database.
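    For instance, a sketch using MySQL's bulk loader, LOAD DATA INFILE, assuming the input were first written out as a tab-separated file (the path is illustrative, and DBD::mysql may need mysql_local_infile enabled in the DSN for the LOCAL variant):

        # One statement loads the whole file in bulk.
        $dbh->do(q{
            LOAD DATA LOCAL INFILE 'C:/temp/dump.tsv'
            INTO TABLE temp
            FIELDS TERMINATED BY '\t'
            (version, jobnumber)
        });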

      OK I put that part back in:
      use strict;
      use warnings;
      use DBI;

      my $start = time();
      my $data;
      my ($version,$jobnumber);
      my $file = "C:/temp/dump.txt";
      open (FH, "$file") or die "$!";
      while(<FH>){
          $data .= $_;
      }
      $version   = substr($data,0,3);
      $jobnumber = substr($data,4,5);

      my $databaseName = "basedb";
      my $connectHost  = "127.0.0.1";
      my $DSN = "DBI:mysql:dbname=$databaseName;host=$connectHost";
      my $db_username  = 'root';
      my $db_password  = 'root';
      my $dbh = DBI->connect($DSN, $db_username, $db_password,
          {RaiseError=>1, PrintWarn=>0, AutoCommit=>0})
          || die "error connecting: $!";

      my $dump_sql = "INSERT INTO temp (version,jobnumber) VALUES(?,?);";
      my $dump_sth = $dbh->prepare($dump_sql) or die "$DBI::errstr";

      eval{
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          $dump_sth->execute($version,$jobnumber) or die "$DBI::errstr";
          ...
      }

        Have you heard that Perl has loops?

        Also, you never change $version and $jobnumber.

        Maybe you want to reconsider using one of the approaches I've already outlined.

Re^2: How to optimized mysql insert into table
by Marshall (Canon) on Jan 17, 2017 at 18:20 UTC
    Please pay attention to what huck and Corion have said. I gave similar advice at Re^4: Perl DBI to MySQL LOAD_FILE. Inserting even all 170,568 records as a single transaction is no big deal - that is just fine. A commit operation is expensive, very expensive. When done properly, simple single-threaded code will take maybe a minute vs 4 hours.

    Update: Performance Benchmark

    To demonstrate my point that running a single transaction for all of the inserts will make a HUGE difference, I offer the following benchmark code that makes a test DB table with 170,568 records.

    I used SQLite so that this can be run by anybody. If anything, MySQL will be faster. I am using an ancient WinXP laptop.

    Results:
    Doing a commit after every insert = 4.9 hours!
    Doing a single commit after all 170,568 inserts = 3 seconds!!!
    That is what I call a HUGE difference!

    Since my laptop takes about 5 hours, the OP's reported time of 4 hours sounds plausible. My guess is that a table creation like I show below will reduce the OP's execution time from 4 hours to 1-2 seconds. Nothing really fancy is required, just some solid code.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Data::Dumper;
    use DBI qw(:sql_types);

    my $dbfile = "./TestingInsert.sqlite";
    if (-e $dbfile) {unlink $dbfile or die "Delete of $dbfile failed! $!\n";}

    my $start_seconds = time();

    my $dbh = DBI->connect("dbi:SQLite:dbname=$dbfile","","",{RaiseError => 1})
        or die "Couldn't connect to database: " . DBI->errstr;

    $dbh->do ("CREATE TABLE test (
                 id integer PRIMARY KEY AUTOINCREMENT,
                 version integer,
                 jobnumber integer
               );
             ");

    $dbh->do("BEGIN");   # running one transaction is needed for speed!!!
    import_data();
    $dbh->do("COMMIT");

    my $end_time = time();
    print "Total elapsed time: ", $end_time-$start_seconds, "\n";

    sub import_data
    {
        my $jobnumber = 1;
        my $add = $dbh->prepare("INSERT INTO test ( version, jobnumber )
                                 VALUES(?,?)");
        while ($jobnumber < 170568)
        {
            $add->execute($jobnumber, $jobnumber++); # DBI insert
        }
        return;
    }
    __END__
    Using a commit for each insert:
      Total elapsed time: 17833 => 297 min => 4.9 hours
    Using a single transaction for 170,568 inserts:
      Total elapsed time: 3 seconds
Re^2: How to optimized mysql insert into table
by poj (Abbot) on Jan 17, 2017 at 18:26 UTC

    Do you want the load to be all or nothing, i.e. if one record fails, then roll back all the previous inserts?

    poj
      Our requirement is just that the data be there; we are not concerned with what the data is, only that it is present.
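
      For reference, had all-or-nothing been the requirement, with AutoCommit => 0 it would be a matter of wrapping the inserts in an eval and rolling back on failure. A sketch, reusing handles like those prepared earlier in the thread (@records is a hypothetical list of [version, jobnumber] pairs):

          # Sketch: assumes $dbh was connected with RaiseError => 1,
          # AutoCommit => 0, and $sth is the prepared INSERT statement.
          eval {
              $sth->execute(@$_) for @records;   # @records is hypothetical
              $dbh->commit;     # every row went in: make them permanent
          };
          if ($@) {
              warn "Load failed, rolling back: $@";
              $dbh->rollback;   # undo all inserts from the failed batch
          }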
