PerlMonks  

Re: Perl memory Memory consumption

by jeroenes (Priest)
on Nov 22, 2001 at 20:06 UTC


in reply to Perl memory Memory consumption

First of all, without specifics it's hard to give you anything but general advice. By specifics I mean: what does your Perl code look like?

Perl is a memory eater. But in my experience you only get to such a huge memory footprint if you have large data structures holding small items; e.g., an array of a million one-byte items will take about 43 MB of memory. Array overhead is large, and hash overhead even more so.
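To illustrate the per-item overhead (a hypothetical sketch; exact numbers vary by perl build), compare holding a million one-byte records as array elements versus packing them into a single string and fetching with substr:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: a million one-byte records. As an array, every byte sits in
# its own scalar, and each scalar carries sizeable bookkeeping
# overhead; as one packed string, the same data costs almost nothing
# per item.
my $n = 1_000_000;

my $packed = 'x' x $n;    # one flat buffer, ~1 MB of payload

# Fetch record $i with substr instead of $array[$i]:
sub get_record {
    my ( $buf, $i ) = @_;
    return substr( $$buf, $i, 1 );
}

print get_record( \$packed, 42 ), "\n";    # prints "x"
```

The trade-off is that you give up the convenience of array operations; this only pays off when the items are small and uniform.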

To me it sounds like you are trying to rebuild your PostgreSQL database in Perl memory. That would be a bad idea. If that is the case, and you did it because Postgres communication is too slow (which I doubt at 10/s), move to a faster system like BerkeleyDB (www.sleepycat.com).
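A minimal sketch of what that might look like, assuming the DB_File module (the Berkeley DB wrapper that ships with perl where libdb is available) — the tied hash behaves like a normal hash, but the data lives on disk instead of in Perl memory:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DB_File;    # Berkeley DB wrapper; needs libdb installed
use Fcntl;      # for O_RDWR, O_CREAT

# Keep the big lookup structure on disk: each access to %cache goes
# to the Berkeley DB file rather than growing the process footprint.
my %cache;
my $file = 'cache.db';    # hypothetical filename for this sketch
tie %cache, 'DB_File', $file, O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "Cannot open $file: $!";

$cache{'msg:1'} = 'some 250-char TXT payload';
print $cache{'msg:1'}, "\n";

untie %cache;
unlink $file;    # clean up the demo file
```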

About the malloc system: perl normally comes with its own allocator. If you want to know more about it, read perlguts.

HTH,

Jeroen
"We are not alone"(FZ)


Re: Re: Perl memory Memory consumption
by alfatux (Novice) on Nov 23, 2001 at 15:57 UTC
    Hi,

    Thanks for your observations, but due to an NDA on my project I can't publish any code. Watching the memory consumption, though, I can see that it grows by about 1 MB/s. I think the problem is in this part:
        my $config = Config->new(config_file => "config.xml", debug => 0);
        $bd = db_out->new($config->{db_data});
        while (1) {
            $records = $bd->stack_out_get(0);
            if (defined($records)) {
                while (($key, $msg) = each %$msg) {
                    process_msg($bd, $msg);
                }
            }
        }
    
      Sorry: stack_out_get returns a hash with the whole resultset of the query, 10 records, each like
      id|TXT VARCHAR(2048)|timestamp_a|timestamp_b|app_Code VARCHAR(3)|number (integer)

      TXT length is always about 250 characters, but in a future version it could be larger.

      Does the program grow if you leave out that process_msg subroutine call?

      Also, use strict and my variables where possible, and don't make circular references like $a = \$b; $b = \$a;, or they'll never really be freed (they may go out of scope and become unreachable, but since they refer to each other, their reference counts never drop to zero).
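A small sketch of that cycle problem, using weaken from Scalar::Util (where available); the Node class here is hypothetical, only there to count how many objects are still alive:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

# Hypothetical class that counts its live instances via DESTROY.
package Node;
my $live = 0;
sub new     { $live++; return bless {}, shift }
sub DESTROY { $live-- }
sub live    { return $live }

package main;

{
    my $x = Node->new;
    my $y = Node->new;
    $x->{peer} = $y;
    $y->{peer} = $x;    # cycle: neither refcount can reach zero
}
print 'after cycle: ', Node::live(), " still alive\n";

{
    my $x = Node->new;
    my $y = Node->new;
    $x->{peer} = $y;
    $y->{peer} = $x;
    weaken( $y->{peer} );    # weak link: breaks the cycle
}
print 'after weaken: ', Node::live() - 2, " extra alive\n";
```

The first block leaks both objects (2 still alive afterwards); in the second, the weakened link lets both be reclaimed when they go out of scope.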

        Hi,

        No, without process_msg the program does not grow.

        I've investigated more and I suspect the problem could be in the get_msg function, which looks like:
        
        
        sub get_msg {
            my $self = shift;
            my ($param) = @_;
            my $consulta_stack = qq{
                SELECT *
                FROM out
                WHERE app = $param
                ORDER BY id DESC LIMIT 100;
            };
            my $sth = $self->{dbh}->prepare($consulta_stack);
            $sth->execute();
            my $msgs = $sth->fetchall_hashref('id');
            $sth->finish;
            return $msgs;
        }
        
        
        
        I also observe that when there are no records in the resultset, memory consumption is higher than when there is something.
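One thing worth noting about get_msg: it interpolates $param into the SQL and re-prepares the same statement on every call. A common DBI pattern is a placeholder with prepare_cached, so the statement is compiled once and reused. A sketch (DBD::SQLite with an in-memory table stands in for PostgreSQL here, just to make it self-contained; the pattern is identical for DBD::Pg):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Throwaway in-memory table for the demo.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 } );
$dbh->do('CREATE TABLE out (id INTEGER PRIMARY KEY, app TEXT, txt TEXT)');
$dbh->do(q{INSERT INTO out (app, txt) VALUES ('A', 'hello')});

sub get_msg {
    my ( $dbh, $param ) = @_;
    # prepare_cached reuses the compiled statement across calls; the
    # placeholder keeps $param out of the SQL text entirely.
    my $sth = $dbh->prepare_cached(
        'SELECT * FROM out WHERE app = ? ORDER BY id DESC LIMIT 100');
    $sth->execute($param);
    my $msgs = $sth->fetchall_hashref('id');
    $sth->finish;
    return $msgs;
}

my $msgs = get_msg( $dbh, 'A' );
print $msgs->{1}{txt}, "\n";    # prints "hello"
```

Whether this fixes the growth or not, the placeholder also protects against SQL injection through $param.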
      First of all, let me say that I dearly hope the conditional in this while loop has a typographical error:
      while (($key, $msg) = each %$msg) { process_msg($bd, $msg); }
      I hope you mean:
      while (($key, $msg) = each %$records)
      If you don't mean this, then this might be some of your problem. Try renaming your hash variable.

      You can also make this much easier to read by using a foreach loop (if you only want to handle the msg):

      foreach my $msg (values %$records) { process_msg($bd, $msg); }

      Anyway, I wanted to suggest that you look at Perl's profiling package. It can't exactly tell you where all your memory is going, but it might help you track down where it's being lost. (Note that you should definitely be using "my" for each of your variables to ensure that they don't hang around out of scope.)

      You can read about Perl's profiling package with perldoc Devel::DProf

      The short of it: to profile a script, run the perl interpreter with the -d switch, as in perl -d:DProf test.pl. When it's done, check the results by running dprofpp or dprofpp -T.

      I hope that helps. :)
