You could use PDL, an efficient math library for Perl. It can do matrix math with ease.
| [reply] |

What, if anything, have you tried? Is there a Perl component to this question or did you mistake the Monastery for the Cloistered Towers of MathMonks?
And please don't double-post your questions.
| [reply] |

This sounds a lot like a homework problem, but I'll give you the following pointers:
- This is equivalent to finding the largest eigenvalue of D.
- A standard way of solving this problem is to use the "power method", which is documented here: Wikipedia entry
- The Perl module `PDL` is useful for performing matrix operations.
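If pulling in PDL is overkill for your case, the power method is simple enough to sketch in plain Perl. This is a rough sketch, not production code; it assumes `D` has a unique dominant eigenvalue, and the helper name is my own invention:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Power method sketch: repeatedly apply D and renormalize.  The
# iterate converges to the eigenvector of the largest-magnitude
# eigenvalue (assuming that eigenvalue is unique).
sub power_method {
    my ($D, $iters) = @_;
    my $n = @$D;
    my @x = (1) x $n;                    # arbitrary starting vector
    my $lambda = 0;
    for (1 .. $iters) {
        # y = D * x
        my @y;
        for my $i (0 .. $n - 1) {
            my $s = 0;
            $s += $D->[$i][$_] * $x[$_] for 0 .. $n - 1;
            $y[$i] = $s;
        }
        # Rayleigh quotient (x' D x) / (x' x) estimates the eigenvalue
        my ($num, $den) = (0, 0);
        for my $i (0 .. $n - 1) {
            $num += $x[$i] * $y[$i];
            $den += $x[$i] * $x[$i];
        }
        $lambda = $num / $den;
        # renormalize so that x' x = 1
        my $norm = 0;
        $norm += $_ ** 2 for @y;
        $norm = sqrt($norm);
        @x = map { $_ / $norm } @y;
    }
    return ($lambda, \@x);
}

my ($lambda, $x) = power_method([[2, 1], [1, 2]], 50);
printf "dominant eigenvalue ~ %.4f\n", $lambda;   # eigenvalues are 3 and 1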
| [reply] [d/l] |

If you set aside for a moment the constraint on the vector `x`, then the problem of maximizing
`x`_{i}D_{ij}x_{j}
(a sum over repeated indices is implied) translates, upon taking the derivative with respect to `x`_{i},
into the equation
`D`_{ij}x_{j} = 0.
This is a special form of the
eigenvalue equation for the matrix `D`, with the
eigenvalue being 0. For non-trivial solutions to exist, the matrix `D` cannot have an inverse, so its
determinant must vanish. Finding the eigenvectors can
be done with PDL; see
Mastering Algorithms with Perl for a discussion. The
Math::Cephes::Matrix module can also do this for real symmetric matrices. Generally, the components of the
eigenvectors found in this way are not completely determined; by convention, the normalization `x`_{i}x_{i} = 1 is imposed.
*Update:* The preceding needs to be augmented by a
test on the
Hessian of the quadratic form to determine what type of
critical point is present.
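To make that update concrete: for a quadratic form `x`_{i}D_{ij}x_{j} the Hessian is the constant matrix `D + D'` (so `2D` for symmetric `D`), and the signs of its eigenvalues classify the critical point. A small illustrative sketch for the symmetric 2x2 case, using the closed-form eigenvalues (my own example values; degenerate zero eigenvalues aren't handled):

```perl
use strict;
use warnings;

# For f(x) = x' D x with symmetric D, the Hessian is the constant
# matrix 2*D, so the eigenvalue signs of D classify the critical
# point at the origin.  Closed-form eigenvalues for [[a, b], [b, c]].
sub classify_2x2 {
    my ($a, $b, $c) = @_;
    my $mean = ($a + $c) / 2;
    my $disc = sqrt((($a - $c) / 2) ** 2 + $b ** 2);
    my ($l1, $l2) = ($mean + $disc, $mean - $disc);
    return 'minimum' if $l1 > 0 && $l2 > 0;
    return 'maximum' if $l1 < 0 && $l2 < 0;
    return 'saddle';
}

print classify_2x2(2, 0, -3), "\n";    # prints "saddle"
```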
| [reply] |

In this case you have to take into account the constraint on the vector `x`. The quadratic form `x`_{i}D_{ij}x_{j} is unbounded on `R`^{n}, so a derivative test (even with a Hessian test) is only going to find local extreme points, and there is no guarantee that those points will lie on the surface of interest (i.e., `x'x = 1`). The difference is that for a point to be a local maximum when `x` is constrained, the derivative only needs to be zero when evaluated in the tangent space of the constraining surface at that point, not zero in every direction.
The standard way to include the surface constraint is to use Lagrangian multipliers.
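For what it's worth, the Lagrange-multiplier route here is short: setting the gradient of `x'Dx` - λ(`x'x` - 1) to zero gives `Dx = λx` for symmetric `D`, so the constrained maximum of `x'Dx` on the unit sphere is the largest eigenvalue of `D`. A quick brute-force sanity check of that claim in plain Perl, sampling the unit circle for a 2x2 example matrix of my own choosing:

```perl
use strict;
use warnings;

# Lagrange sketch: grad[ x'Dx - lambda*(x'x - 1) ] = 0 gives
# D x = lambda x (symmetric D), so the constrained maximum of
# x'Dx on the unit sphere is the largest eigenvalue of D.
# Brute-force check for a 2x2 example by sampling the unit circle.
my @D = ([2, 1], [1, 2]);                # eigenvalues 3 and 1

my $pi   = 4 * atan2(1, 1);
my $best = -9e99;
for my $k (0 .. 9999) {
    my $t = 2 * $pi * $k / 10000;
    my @x = (cos($t), sin($t));          # x'x = 1 by construction
    my $q = 0;
    for my $i (0 .. 1) {
        $q += $x[$i] * $D[$i][$_] * $x[$_] for 0 .. 1;
    }
    $best = $q if $q > $best;
}
printf "max of x'Dx on the unit circle ~ %.4f\n", $best;   # ~ 3
```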
| [reply] |

I was thinking of the problem differently:
find the extremal points of
`x`^{†}Dx, which leads to the eigenvalue
equation `Dx = 0`. This equation doesn't determine
`x`^{†}x completely, so one is free to impose, for example, `x`^{†}x = 1 as a
normalization condition. But you're right that if
`x`^{†}x = 1 is intended as a true constraint, then a method like Lagrange multipliers should be used.
| [reply] |

How would you do it by hand / paper & pencil? Do you apply a certain algorithm? If so, that algorithm can then most probably be translated into a computer program.
CountZero *"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity."* - The Tao of Programming, 4.1 - Geoffrey James
| [reply] |

What you are trying to solve is a nonlinearly constrained nonlinear (actually, quadratic) mathematical program. This is a tough one, I can tell you...
However, all is not lost, because you might be able to simplify your problem into something actually soluble.
First: are you absolutely sure that you want to optimize over the boundary of the unit ball (this is what the constraint `x'x=1` seems to impose)? Isn't it enough to ensure that `x` is constrained to a sane region, e.g. the n-dimensional unit box? If so, you get a quadratic program:
`max_x (x' D x)
s.t. -1 <= x_i <= 1 for all i = 1..n
`
See more on quadratic programs here: Quadratic programming.
Second: everything depends on whether your `D` is negative definite (all eigenvalues negative). If not, you are out of luck: the problem is (at least) NP-hard. If, on the other hand, `D` is nicely negative definite, then the objective function is concave and the uniqueness of the optimal solution is guaranteed. In this case, you could use a commercial quadratic solver, like CPLEX, or, after suitable linearization, a linear program solver, too. E.g., Math::GLPK has a nice Perl interface (believe me it's nice: I wrote it!...:-).
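A cheap way to test the negative-definiteness condition, at least for small matrices, is Sylvester's criterion on the leading principal minors (signs alternate, starting negative). A sketch for the 2x2 case only, with an example matrix of my own; for real problems use an eigenvalue routine from PDL instead:

```perl
use strict;
use warnings;

# Sylvester-style sketch: a symmetric matrix is negative definite
# iff its leading principal minors alternate in sign starting
# negative.  For 2x2 that means: D[0][0] < 0 and det(D) > 0.
sub is_neg_def_2x2 {
    my ($D) = @_;
    my $m1 = $D->[0][0];
    my $m2 = $D->[0][0] * $D->[1][1] - $D->[0][1] * $D->[1][0];
    return ($m1 < 0 && $m2 > 0) ? 1 : 0;
}

# eigenvalues -1 and -3: concave objective, safe to maximize
print is_neg_def_2x2([[-2, 1], [1, -2]]) ? "concave\n" : "not concave\n";
```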
| [reply] [d/l] [select] |