
Gradient

Given a function $f$ of $n$ variables $x_1, x_2, \ldots, x_n$, we define the partial derivative relative to variable $x_i$, written $\frac{\partial f}{\partial x_i}$, to be the derivative of $f$ with respect to $x_i$ treating all variables except $x_i$ as constant.

Example: Find the partial derivatives of $f(x_1, x_2) = 3x_1^2 + x_1 x_2^2$.

The answer is: treating $x_2$ as constant gives $\frac{\partial f}{\partial x_1} = 6x_1 + x_2^2$, and treating $x_1$ as constant gives $\frac{\partial f}{\partial x_2} = 2 x_1 x_2$.
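
Computations like this are easy to check with a computer algebra system. Here is a minimal sketch using the SymPy library, applied to the illustrative function from the example above:

    # Check the partial derivatives symbolically with SymPy.
    from sympy import symbols, diff

    x1, x2 = symbols('x1 x2')
    f = 3*x1**2 + x1*x2**2

    # diff(f, x1) treats x2 as a constant, exactly as in the definition.
    print(diff(f, x1))   # 6*x1 + x2**2
    print(diff(f, x2))   # 2*x1*x2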

Let $x$ denote the vector $(x_1, x_2, \ldots, x_n)$. With this notation we write $f(x)$ for $f(x_1, x_2, \ldots, x_n)$, $\frac{\partial f}{\partial x_i}(x)$ for the partial derivative evaluated at the point $x$, etc. The gradient of $f$ at $x$, written $\nabla f(x)$, is the vector $\left( \frac{\partial f}{\partial x_1}(x), \frac{\partial f}{\partial x_2}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right)$. The gradient vector $\nabla f(x)$ gives the direction of steepest ascent of the function $f$ at the point $x$. The gradient acts like the derivative in that small changes around a given point $x^*$ can be estimated using the gradient:

\[ f(x^* + \Delta x) \approx f(x^*) + \nabla f(x^*) \cdot \Delta x, \]

where $\Delta x = (\Delta x_1, \Delta x_2, \ldots, \Delta x_n)$ denotes the vector of changes.

Example: Let $f(x_1, x_2) = 3x_1^2 + x_1 x_2^2$ as above, and suppose we want to estimate $f(1.01, 1.98)$ starting from the point $x^* = (1, 2)$, where $f(1, 2) = 7$.

In this case, $\frac{\partial f}{\partial x_1} = 6x_1 + x_2^2$ and $\frac{\partial f}{\partial x_2} = 2 x_1 x_2$. Since $\frac{\partial f}{\partial x_1}(1, 2) = 10$ and $\frac{\partial f}{\partial x_2}(1, 2) = 4$, we get

\[ f(1.01, 1.98) \approx f(1, 2) + 10(0.01) + 4(-0.02) = 7 + 0.1 - 0.08 = 7.02. \]

So $f(1.01, 1.98) \approx 7.02$ (the exact value is $7.0199$ to four decimal places).
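
The estimate can also be checked numerically. A minimal sketch in plain Python, assuming the same function, base point, and vector of changes as in the example:

    # First-order estimate f(x* + dx) ~ f(x*) + grad f(x*) . dx
    def f(x1, x2):
        return 3*x1**2 + x1*x2**2

    def grad_f(x1, x2):
        # Partial derivatives computed by hand above.
        return (6*x1 + x2**2, 2*x1*x2)

    xstar = (1.0, 2.0)               # base point x*
    dx = (0.01, -0.02)               # vector of changes
    g = grad_f(*xstar)

    estimate = f(*xstar) + g[0]*dx[0] + g[1]*dx[1]
    exact = f(xstar[0] + dx[0], xstar[1] + dx[1])
    print(estimate, exact)           # approximately 7.02 and 7.0199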


Hessian matrix

Second partials $\frac{\partial^2 f}{\partial x_i \partial x_j}$ are obtained from $f(x)$ by taking the derivative relative to $x_i$ (this yields the first partial $\frac{\partial f}{\partial x_i}$) and then by taking the derivative of $\frac{\partial f}{\partial x_i}$ relative to $x_j$. So we can compute $\frac{\partial^2 f}{\partial x_1 \partial x_1}$, $\frac{\partial^2 f}{\partial x_1 \partial x_2}$, and so on. These values are arranged into the Hessian matrix

\[
H(x) = \begin{pmatrix}
\frac{\partial^2 f}{\partial x_1 \partial x_1}(x) & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}(x) \\
\vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1}(x) & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n}(x)
\end{pmatrix}.
\]

The Hessian matrix is a symmetric matrix, that is, $\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}$ (provided the second partials are continuous).

Example (continued): Find the Hessian matrix of $f(x_1, x_2) = 3x_1^2 + x_1 x_2^2$.

The answer is

\[ H(x) = \begin{pmatrix} 6 & 2x_2 \\ 2x_2 & 2x_1 \end{pmatrix}. \]
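
As with first partials, the Hessian can be verified symbolically. A minimal sketch, again using the SymPy library and the same illustrative function:

    # Build the Hessian symbolically and confirm it is symmetric.
    from sympy import symbols, hessian

    x1, x2 = symbols('x1 x2')
    f = 3*x1**2 + x1*x2**2

    H = hessian(f, (x1, x2))
    print(H)                 # Matrix([[6, 2*x2], [2*x2, 2*x1]])
    assert H == H.T          # mixed partials agree, so H is symmetric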

