PerlMonks  

DBD::CSV eats memory

by perlmonkdr (Beadle)
on Nov 16, 2007 at 02:00 UTC (#651110)
perlmonkdr has asked for the wisdom of the Perl Monks concerning the following question:

Hello guys,

I'm working with the DBD::CSV module and can't find what is eating so much memory. Check this example:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Note: the attribute hash must be the fourth argument to connect();
# passing it second (as the username) silently drops RaiseError.
my $dbh = DBI->connect('DBI:CSV:f_dir=.', undef, undef, { RaiseError => 1 });

my $check = $dbh->prepare(q{
    SELECT COUNT(*) FROM list WHERE id = ? LIMIT 1
});

my $i = 0;
while ($i++ < 1_000_000) {
    $check->execute($i . ('z' x 10));
    print $i . ('z' x 10), "\n";
}

$check->finish;
$dbh->disconnect;

To make the list table, use this:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '>', 'list' or die "Cannot open list: $!";
print $fh "id\n";
for (0 .. 1_000_000) {
    print $fh $_ . ('z' x 10), "\n";
}
close $fh;

I hope your computer is slow enough that this takes several seconds to process; if not, increase it to 10 million or a billion lines, and if you still can't see anything, please gift me your PC ;-)...

Has something similar happened to anyone else?

What can I do to decrease the memory usage (the hunger of it, rather) and the CPU usage too? It's really, really slow.

Re: DBD::CSV eats memory
by jZed (Prior) on Nov 16, 2007 at 02:17 UTC
    I'm not sure what your test is supposed to illustrate or what you think memory has to do with speed in this case, but as the maintainer of DBD::CSV I can assure you that it was not built to handle a million rows quickly. Use SQLite or a full RDBMS for data sets of that size. That said, there are some big speed improvements coming to DBD::CSV soon (a new version of its SQL engine, SQL::Statement). I'll announce it on this site when it's ready.
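For comparison, here is a minimal sketch of the same kind of point lookup done with DBD::SQLite. The in-memory database, the table layout, and the index name are assumptions for illustration, not anything from this thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical comparison: an indexed SQLite lookup instead of DBD::CSV's
# full-file scan.  ":memory:" keeps the sketch self-contained.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', undef, undef,
                       { RaiseError => 1 });

$dbh->do('CREATE TABLE list (id TEXT)');
$dbh->do('CREATE INDEX idx_list_id ON list (id)');

# Bulk-load inside one transaction; per-row autocommit would be slow.
my $ins = $dbh->prepare('INSERT INTO list (id) VALUES (?)');
$dbh->begin_work;
$ins->execute($_ . ('z' x 10)) for 0 .. 9_999;
$dbh->commit;

# The repeated lookup from the original post, now served by the index.
my $check = $dbh->prepare('SELECT COUNT(*) FROM list WHERE id = ?');
$check->execute('42' . ('z' x 10));
my ($count) = $check->fetchrow_array;
print "count=$count\n";   # prints count=1

$dbh->disconnect;
```

Unlike DBD::CSV, which rereads the file on every `execute`, SQLite answers each lookup from the index in roughly constant time.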

      OK, thanks. In this example I was trying to show how DBD::CSV eats more memory on each loop iteration; I used a million rows because that's the way to make the memory growth visible.

      Anyway, I'll try SQLite or DBM::Deep instead.

      Thanks again for your help.

        You wrote "DBD::CSV eats memory in each loop". How do you know that? Your script doesn't do any memory testing, profiling, or printouts. And if it's true, what OS are you on, and what versions of the software and its prerequisites are you using? (Your version of DBI, for example, is more than 5 years old.)
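One way to actually back up such a claim, sketched here for Linux only: sample the process's resident set size before and after the suspect loop. The helper name `rss_kb` is made up for this illustration.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Read this process's resident set size (in kB) from /proc/self/status.
# Linux-specific; returns undef elsewhere.
sub rss_kb {
    open my $fh, '<', '/proc/self/status' or return undef;
    while (my $line = <$fh>) {
        return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
    }
    return undef;
}

my $before = rss_kb();

# Stand-in for the suspect loop: deliberately retain ~10 MB of strings.
my @hold;
push @hold, 'z' x 1024 for 1 .. 10_000;

my $after = rss_kb();
printf "RSS before: %s kB, after: %s kB\n", $before, $after;
```

Printing the two numbers around the real `execute` loop would show whether memory actually grows per iteration, which is the evidence the question above asks for.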
Re: DBD::CSV eats memory
by dragonchild (Archbishop) on Nov 16, 2007 at 03:57 UTC
    A third option would be DBM::Deep, which is designed to handle a million rows. There's no SQL interface to it, but you're more than welcome to build one. I'll even help. :-)

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
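A minimal sketch of what keyed lookups with DBM::Deep might look like; the filename and the hash layout are assumptions for illustration, not anything from this thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBM::Deep;   # CPAN module; assumed installed

# Hypothetical sketch: store ids as hash keys in a DBM::Deep file so each
# lookup is a keyed access rather than a full-file scan.  The filename
# "list.db" is an assumption.
my $db = DBM::Deep->new('list.db');

$db->{$_ . ('z' x 10)} = 1 for 0 .. 999;

my $key = '42' . ('z' x 10);
print exists $db->{$key} ? "found\n" : "missing\n";   # prints found
```

The hash tied to the file lives on disk, so the working set stays small no matter how many keys are stored.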
      I had DBM::Deep working with my DBD::DBM at one point, but I dropped the ball on it. I think it would be fairly trivial to make them work together. I don't have the tuits to do more than offer suggestions, but if you or someone else wants to patch DBD::DBM, I'd be glad to kibitz and apply the patches.

        Double thanks :-)

        If you want to go ahead and describe what needs to be done, I'd be willing to take a look at it.


      Right now I need to use a DBI driver only, for compatibility reasons.

      Thanks a lot.
