
Gradient

Given a function $f$ of $n$ variables $x_1, x_2, \ldots, x_n$, we define the partial derivative relative to variable $x_i$, written $\frac{\partial f}{\partial x_i}$, to be the derivative of $f$ with respect to $x_i$, treating all variables except $x_i$ as constant.

Example 3.1.1: Let $f(x_1, x_2) = 3 x_1^2 x_2^3$. Compute $\frac{\partial f}{\partial x_1}$.

The answer is: $\frac{\partial f}{\partial x_1} = 6 x_1 x_2^3$.
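This computation is easy to check with a computer algebra system. Here is a minimal sketch in Python using SymPy (an assumed tool choice, not part of the notes; `sympy.diff` takes the partial derivative while holding the other variable constant):

    import sympy

    # The function from Example 3.1.1.
    x1, x2 = sympy.symbols('x1 x2')
    f = 3 * x1**2 * x2**3

    # Partial derivative with respect to x1 (x2 held constant).
    print(sympy.diff(f, x1))   # 6*x1*x2**3
    # Partial derivative with respect to x2 (x1 held constant).
    print(sympy.diff(f, x2))   # 9*x1**2*x2**2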

Let $x$ denote the vector $(x_1, x_2, \ldots, x_n)$. With this notation, $f(x) = f(x_1, \ldots, x_n)$, $\frac{\partial f}{\partial x_i}(x) = \frac{\partial f}{\partial x_i}(x_1, \ldots, x_n)$, etc. The gradient of $f$ at $x$, written $\nabla f(x)$, is the vector $\left( \frac{\partial f}{\partial x_1}(x), \frac{\partial f}{\partial x_2}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right)$. The gradient vector $\nabla f(x)$ gives the direction of steepest ascent of the function $f$ at the point $x$. The gradient acts like the derivative in that small changes around a given point $x^*$ can be estimated using the gradient:

\[ f(x^* + \Delta x) \approx f(x^*) + \nabla f(x^*) \cdot \Delta x, \]

where $\Delta x = (\Delta x_1, \ldots, \Delta x_n)$ denotes the vector of changes.

Example: Let $f(x_1, x_2) = x_1^2 + 3 x_1 x_2$. Use the gradient at $x^* = (1, 2)$ to estimate $f(1.01, 1.99)$.

In this case, $\frac{\partial f}{\partial x_1} = 2 x_1 + 3 x_2$ and $\frac{\partial f}{\partial x_2} = 3 x_1$. Since $f(1, 2) = 7$ and $\nabla f(1, 2) = (8, 3)$, we get

\[ f(1.01, 1.99) \approx f(1, 2) + \nabla f(1, 2) \cdot (0.01, -0.01) = 7 + 8(0.01) + 3(-0.01). \]

So $f(1.01, 1.99) \approx 7.05$.
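The accuracy of such gradient estimates is easy to check numerically. Here is a minimal sketch in plain Python (the function, base point, and changes are the ones from the example above):

    # f(x1, x2) = x1**2 + 3*x1*x2, as in the example.
    def f(x1, x2):
        return x1**2 + 3 * x1 * x2

    # Base point x* = (1, 2); the gradient there is (8, 3) from the partials above.
    fx = f(1, 2)          # 7
    grad = (8, 3)
    dx = (0.01, -0.01)    # vector of changes

    estimate = fx + grad[0] * dx[0] + grad[1] * dx[1]
    print(estimate)       # approx 7.05, the gradient estimate
    print(f(1.01, 1.99))  # approx 7.0498, the exact value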


Hessian matrix

Second partials $\frac{\partial^2 f}{\partial x_i \partial x_j}$ are obtained from $f(x)$ by taking the derivative relative to $x_i$ (this yields the first partial $\frac{\partial f}{\partial x_i}$) and then taking the derivative of $\frac{\partial f}{\partial x_i}$ relative to $x_j$. So we can compute $\frac{\partial^2 f}{\partial x_1 \partial x_1}$, $\frac{\partial^2 f}{\partial x_1 \partial x_2}$, and so on. These values are arranged into the Hessian matrix

\[ H(x) = \begin{pmatrix}
\frac{\partial^2 f}{\partial x_1 \partial x_1}(x) & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n}(x) \\
\vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1}(x) & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n}(x)
\end{pmatrix}. \]

The Hessian matrix is a symmetric matrix; that is, $\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}$ (provided the second partials are continuous, as they are for the functions considered here).

Example 3.1.1 (continued): Find the Hessian matrix of $f(x_1, x_2) = 3 x_1^2 x_2^3$.

The answer is
\[ H(x) = \begin{pmatrix}
6 x_2^3 & 18 x_1 x_2^2 \\
18 x_1 x_2^2 & 18 x_1^2 x_2
\end{pmatrix}. \]
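Both the entries and the symmetry can be confirmed with the same SymPy sketch as before (`sympy.hessian` builds the matrix of second partials directly):

    import sympy

    # The function from Example 3.1.1.
    x1, x2 = sympy.symbols('x1 x2')
    f = 3 * x1**2 * x2**3

    # Hessian: the 2x2 matrix of second partial derivatives.
    H = sympy.hessian(f, (x1, x2))
    print(H)         # Matrix([[6*x2**3, 18*x1*x2**2], [18*x1*x2**2, 18*x1**2*x2]])
    print(H == H.T)  # True: the Hessian is symmetric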

