This tutorial presents an introduction to optimization problems that involve finding a maximum or a minimum value of an objective function $f(x_1,x_2,\ldots,x_n)$ subject to a constraint of the form $g(x_1,x_2,\ldots,x_n)=k$.

Consider first the unconstrained function

- $f(x,y)=2x^2+y^2$
- $f_x(x,y)=4x=0$
- $f_y(x,y)=2y=0$

$f(x,y)$ has one stationary point at $(0,0)$.
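The stationary point can be recovered mechanically; a minimal sketch, assuming SymPy is available, solves $\nabla f = 0$ symbolically:

```python
# Find the stationary point of f(x, y) = 2x^2 + y^2 by solving grad f = 0.
import sympy as sp

x, y = sp.symbols("x y")
f = 2 * x**2 + y**2

# Partial derivatives: f_x = 4x and f_y = 2y, matching the text.
grad = [sp.diff(f, x), sp.diff(f, y)]

# Solve f_x = 0 and f_y = 0 simultaneously.
stationary = sp.solve(grad, [x, y], dict=True)
print(stationary)  # x = 0, y = 0
```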

The type of a stationary point is determined by the Hessian matrix of second partial derivatives:

- $H(f)=\begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix}$

- If $H(f)$ is positive definite at a point $x$, then $f$ has a local minimum at $x$.
- If $H(f)$ is negative definite at a point $x$, then $f$ has a local maximum at $x$.
- If $H(f)$ has both negative and positive eigenvalues, then $x$ is a saddle point.
- Otherwise, the test is inconclusive.

In the above example,

- $H(f)=\begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}$

which is positive definite.

Therefore $f(x,y)$ has a local minimum at $(0,0)$.
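The Hessian test above can be sketched numerically, assuming NumPy is available, by inspecting the eigenvalues of $H(f)$:

```python
# Classify the stationary point (0, 0) by the eigenvalues of the Hessian
# of f(x, y) = 2x^2 + y^2, which is constant because f is quadratic.
import numpy as np

H = np.array([[4.0, 0.0],
              [0.0, 2.0]])

eigvals = np.linalg.eigvalsh(H)  # eigenvalues of a symmetric matrix
if np.all(eigvals > 0):
    print("positive definite -> local minimum")
elif np.all(eigvals < 0):
    print("negative definite -> local maximum")
elif np.any(eigvals > 0) and np.any(eigvals < 0):
    print("indefinite -> saddle point")
else:
    print("test inconclusive")
```

Here both eigenvalues (4 and 2) are positive, so the minimum branch is taken.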

At a constrained optimum of $f$ subject to $g=k$, the gradient of $f$ is parallel to the gradient of $g$:

- $\nabla f(x_1,x_2,\ldots,x_n) = \lambda\, \nabla g(x_1,x_2,\ldots,x_n).$

The number $\lambda$ in the equation is known as the Lagrange multiplier.

This condition, together with the constraint itself, is captured by forming the Lagrangian:

- $L(x_1,x_2,\ldots,x_n,\lambda)= f(x_1,x_2,\ldots,x_n)+\lambda\,(k-g(x_1,x_2,\ldots,x_n))$

Then finding the gradient and Hessian as was done above will determine any optimum values of $L(x_1,x_2,\ldots,x_n,\lambda)$.

Suppose we now want to find the optimum values of $f(x,y)=2x^2+y^2$ subject to $x+y=1$, an example taken from [2].

The Lagrangian method then turns this into an unconstrained function:

- $L(x,y,\lambda)= 2x^2+y^2+\lambda\,(1-x-y)$

The gradient of this new function is

- $\frac{\partial L}{\partial x}(x,y,\lambda)= 4x-\lambda=0$
- $\frac{\partial L}{\partial y}(x,y,\lambda)= 2y-\lambda=0$
- $\frac{\partial L}{\partial \lambda}(x,y,\lambda)= 1-x-y=0$
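The three first-order conditions above can be reproduced and solved symbolically; a minimal sketch, assuming SymPy is available (the multiplier is named `lam` because `lambda` is a reserved word in Python):

```python
# Build L(x, y, lam) = 2x^2 + y^2 + lam*(1 - x - y) and solve grad L = 0.
import sympy as sp

x, y, lam = sp.symbols("x y lam")
L = 2 * x**2 + y**2 + lam * (1 - x - y)

# Differentiating gives 4x - lam, 2y - lam, and 1 - x - y, as in the text.
eqs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(eqs, [x, y, lam], dict=True)
print(sol)  # x = 1/3, y = 2/3, lam = 4/3
```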

Because the equations are linear, their stationary point can be found by writing them in matrix form.

- $\begin{bmatrix} 4 & 0 & -1 \\ 0 & 2 & -1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$

This results in $x=1/3,\; y=2/3,\; \lambda=4/3$.
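The same system, written as $A\mathbf{v}=\mathbf{b}$, can be checked numerically; a minimal sketch, assuming NumPy is available:

```python
# Solve the linear system from the three first-order conditions:
# 4x - lam = 0, 2y - lam = 0, x + y = 1.
import numpy as np

A = np.array([[4.0, 0.0, -1.0],   # 4x - lam = 0
              [0.0, 2.0, -1.0],   # 2y - lam = 0
              [1.0, 1.0,  0.0]])  # x + y = 1
b = np.array([0.0, 0.0, 1.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # approximately 1/3, 2/3, 4/3
```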

Next we can use the Hessian as before to determine the type of this stationary point.

- $H(L)=\begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}$ (the Hessian of $L$ with respect to $x$ and $y$)

Since $H(L)$ is positive definite, the solution $(1/3,\; 2/3,\; 4/3)$ minimizes $f(x,y)=2x^2+y^2$ subject to $x+y=1$, with $f(1/3,2/3)=2/3$.
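As a cross-check, a general-purpose constrained solver should land on the same point; a minimal sketch, assuming SciPy is available:

```python
# Minimize f(x, y) = 2x^2 + y^2 subject to x + y = 1 numerically.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda v: 2 * v[0] ** 2 + v[1] ** 2,   # objective f(x, y)
    x0=[0.0, 0.0],                          # arbitrary starting point
    constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}],
)
print(res.x, res.fun)  # approximately [1/3, 2/3] and 2/3
```

With an equality constraint present, `minimize` defaults to the SLSQP method, which agrees with the analytic answer to solver tolerance.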

- [1] T. K. Moon and W. C. Stirling, *Mathematical Methods and Algorithms for Signal Processing*. Prentice Hall, 2000.
- [2] http://mat.gsia.cmu.edu/QUANT/NOTES/chap4/node1.html
- [3] http://www.ece.tamu.edu/~chmbrlnd/Courses/ECEN601/ECEN601-Chap3.pdf

Wikipedia, the free encyclopedia © 2001-2006 Wikipedia contributors (Disclaimer)

This article is licensed under the GNU Free Documentation License.

Last updated on Tuesday October 07, 2008 at 14:06:44 PDT (GMT -0700)
