note
randyk
<p>
If you set aside for a moment the constraint on the vector <tt>x</tt>, then the problem of maximizing
<tt>x<sub>i</sub>D<sub>ij</sub>x<sub>j</sub></tt>
(a sum over repeated indices is implied) translates, upon taking the derivative with respect to <tt>x<sub>i</sub></tt>
(assuming <tt>D</tt> is symmetric), into the equation
<tt> D<sub>ij</sub>x<sub>j</sub> = 0</tt>.
This is a special form of the
<a href="http://en.wikipedia.org/wiki/Eigenvalue">
eigenvalue</a> equation for the matrix <tt>D</tt>, with the
eigenvalue being 0. For non-trivial solutions to exist, the matrix <tt>D</tt> cannot be invertible, so its
<a href="http://en.wikipedia.org/wiki/Determinant">
determinant</a> must vanish. Finding the eigenvectors can
be done with <a
href="http://pdl.perl.org/">PDL</a>; see <a
href="http://www.oreilly.com/catalog/maperl/">
Mastering Algorithms with Perl</a> for a discussion. The <a
href="http://search.cpan.org/~RKOBES/Math-Cephes/lib/Math/Cephes/Matrix.pm">
Math::Cephes::Matrix</a> module can also do this for real symmetric matrices. Generally, the components of the
eigenvectors found in this way are not completely determined; by convention, the normalization <tt>x<sub>i</sub>x<sub>i</sub> = 1</tt> is imposed.
</p><p>
<em>Update:</em> The preceding needs to be augmented by a
test on the <a
href="http://en.wikipedia.org/wiki/Hessian_matrix">
Hessian</a> of the quadratic form (which here is just <tt>2D</tt>) to determine what type of
critical point is present: all eigenvalues positive indicates a minimum, all negative a maximum, and mixed signs a saddle point.
</p>
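<p>
As an illustration of the eigenvector computation described above, here is a short sketch in Python with NumPy rather than the PDL or Math::Cephes::Matrix code the text refers to; the 2&times;2 matrix is a made-up stand-in for <tt>D</tt>:
</p>
<pre>
# Sketch (not the PDL/Math::Cephes code mentioned in the text):
# eigenvalues and eigenvectors of a real symmetric matrix with NumPy.
import numpy as np

# A hypothetical 2x2 real symmetric matrix standing in for D.
D = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for real symmetric (Hermitian) matrices; it
# returns eigenvalues in ascending order and orthonormal eigenvectors
# as the columns of the second result.
eigenvalues, eigenvectors = np.linalg.eigh(D)

# det(D - lambda*I) vanishes at each eigenvalue, as noted above.
for lam in eigenvalues:
    assert abs(np.linalg.det(D - lam * np.eye(2))) &lt; 1e-9

# Each eigenvector satisfies the normalization x_i x_i = 1.
for k in range(2):
    x = eigenvectors[:, k]
    assert abs(x @ x - 1.0) &lt; 1e-12

# The signs of the eigenvalues classify the critical point:
# all positive -> minimum, all negative -> maximum, mixed -> saddle.
print(eigenvalues)  # [1. 3.]
</pre>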