<p>Yes, fitness is related to the environment. But...</p>
<p>In this case, the algorithm becomes simpler if FITNESS is more tightly associated with the INDIVIDUAL. For example, this:</p>
<code>
sub choose {
my $self = shift;
my $f = rand(1.0);
my $index = 0;
my $sum = 0.0;
foreach my $fitness (@{$self->{FITNESSES}}) {
$sum += $fitness;
return ${$self->{INDIVIDUALS}}[$index] if $sum >= $f;
++$index;
}
die "can't select an individual";
}
</code>
<p>becomes:</p>
<code>
sub choose {
my $self = shift;
my $f = rand(1.0);
my $sum = 0.0;
foreach my $individual (@{$self->{INDIVIDUALS}}) {
$sum += $individual->{FITNESS};
return $individual if $sum >= $f;
}
die "can't select an individual";
}
</code>
<p>which I consider an improvement, not only because there's (a little) less code, but because it emphasizes the concept of choosing <em>an individual</em> rather than <em>a fitness</em>.</p>
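<p>Note that <code>choose</code> implicitly assumes the fitnesses sum to 1.0, since it compares the running total against <code>rand(1.0)</code>. Here's a minimal, self-contained sketch of how it might be used; the <code>Population</code> package and its normalizing constructor are my own assumptions for illustration, not part of the original code:</p>
<code>
use strict;
use warnings;

# Hypothetical package wrapping choose(); the constructor is
# an assumption added for illustration.
package Population;

sub new {
    my ($class, @individuals) = @_;
    # Normalize fitnesses to sum to 1.0, as rand(1.0) requires.
    my $total = 0;
    $total += $_->{FITNESS} for @individuals;
    $_->{FITNESS} /= $total for @individuals;
    return bless { INDIVIDUALS => \@individuals }, $class;
}

sub choose {
    my $self = shift;
    my $f = rand(1.0);
    my $sum = 0.0;
    foreach my $individual (@{$self->{INDIVIDUALS}}) {
        $sum += $individual->{FITNESS};
        return $individual if $sum >= $f;
    }
    die "can't select an individual";
}

package main;

my $pop = Population->new(
    { NAME => 'a', FITNESS => 3 },
    { NAME => 'b', FITNESS => 1 },
);

# 'a' should be selected roughly three times as often as 'b'.
my %counts;
++$counts{ $pop->choose->{NAME} } for 1 .. 10_000;
print "$counts{a} vs $counts{b}\n";
</code>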
<p>Finally, I must admit that I have a bias regarding the idea you presented. Yes, environment does determine fitness. However, what if you're trying to evolve <em>generalized</em> behavior, i.e., a program that will perform well in <em>any</em> environment, and not just the ones it was trained in? The little work I've done with GP has been focused on trying to avoid that kind of "over-training", or specialization.</p>