What is Optimization?

Finding the best solution under a given set of conditions

Finding the highest or lowest values of a function

Examples:

Minimize the weight of a part

Minimize the cost of a part

Maximize the force transmission of a device

Find the shortest route to a destination

We need some kind of function to describe what we are trying to achieve.

Minimize weight:

f = Volume*ρ

Density is a material property; if the material is selected, then ρ will not change.

Volume = f(L,h,w), where L, h, and w are the length, height, and width.

Certainly we could make L, h, and w very small to make the weight small, or minimum.

The function which represents what we are trying to optimize is called the objective function:

f(x) = L*h*w*ρ

x is a vector:

x1 = L

x2 = h

x3 = w

x1, x2, and x3 (L, h, w) are called design variables.

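As a concrete illustration, here is a minimal sketch of this objective function in Python; the density value and the trial dimensions are assumptions for illustration only.

    import numpy as np

    RHO = 7850.0  # assumed density (steel, kg/m^3), illustrative only

    def objective(x):
        """Weight of a rectangular part: f(x) = L*h*w*rho,
        where x = [L, h, w] is the design-variable vector (metres)."""
        L, h, w = x
        return L * h * w * RHO

    # Evaluate the objective at a trial design point.
    x = np.array([0.5, 0.1, 0.05])  # L, h, w
    print(objective(x))             # weight in kg -> 19.625
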
We could make L, h, and w very small to reduce weight; this would be called the trivial solution to the problem.

Most applications would require a “window” of values on the parameters.

For example, L must not exceed Lmax and must not be less than Lmin:

Lmin < L < Lmax

The window of values we just placed on L is called a constraint.

Most likely, h and w would have constraints imposed as well.

There are equality constraints and inequality constraints.

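As a sketch of how bounds and an inequality constraint look in practice, here is the weight problem set up with SciPy's minimize; every bound value and the minimum-area constraint are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    RHO = 7850.0  # assumed density, kg/m^3

    def objective(x):
        L, h, w = x
        return L * h * w * RHO  # weight to be minimized

    # Bounds: the "window" Lmin <= L <= Lmax on each design variable.
    bounds = [(0.1, 1.0),   # L
              (0.02, 0.2),  # h
              (0.01, 0.1)]  # w

    # Inequality constraint (SciPy convention: fun(x) >= 0). Here the
    # cross-section h*w must supply a minimum area, an assumed
    # stand-in for a strength requirement.
    cons = [{"type": "ineq", "fun": lambda x: x[1] * x[2] - 0.002}]

    x0 = np.array([0.5, 0.1, 0.05])  # starting design
    res = minimize(objective, x0, bounds=bounds, constraints=cons)
    print(res.x, res.fun)
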
Constraints bound the solution domain.

Within this domain there may be many optima (local optima), or there may be one solution that is the absolute best: the global solution.

Depending on what kind of technique we use to find the optimum value of our function, we may or may not find the global solution.

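A minimal sketch of this behavior: a gradient-based search converges to whichever local minimum is nearest its starting point, so a simple multi-start loop keeps the best of several runs. The test function is assumed for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # A one-dimensional function with several local minima.
    def f(x):
        return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

    # Restart the local search from several points and keep the best.
    best = None
    for x0 in np.linspace(-5.0, 5.0, 11):
        res = minimize(f, [x0])
        if best is None or res.fun < best.fun:
            best = res

    print(best.x, best.fun)  # best minimum found (not guaranteed global)
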
This problem is called a “single variable” problem, because there is only one design variable.

We found tension as a function of distance along the pole.

We know from calculus that if we find the place in a domain where the first derivative of a function is zero, and the second derivative is positive, there is a minimum of the function.

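A small sketch of that derivative test with SymPy; the function below is an assumed stand-in, not the tension function from the example.

    import sympy as sp

    x = sp.symbols("x")
    f = x**2 - 4*x + 5      # assumed single-variable function

    fp = sp.diff(f, x)      # first derivative
    fpp = sp.diff(f, x, 2)  # second derivative

    # Stationary points: where f'(x) = 0.
    for s in sp.solve(fp, x):
        # f''(s) > 0 means a local minimum (here x = 2, f'' = 2 > 0).
        print(s, fpp.subs(x, s))
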
With more than one design variable, the mathematical necessity of the “first derivative” being zero becomes the requirement that the gradient of the function must be zero.

How is a gradient determined, and what is its interpretation?

The gradient is a vector that points in the direction of steepest increase of the function; following it (or its negative) points us toward an optimum.

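A minimal sketch, with an assumed two-variable function, of estimating the gradient by finite differences and stepping against it (steepest descent) until the gradient is nearly zero.

    import numpy as np

    def f(x):
        return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2  # assumed function

    def grad(x, eps=1e-6):
        """Central-difference estimate of the gradient vector."""
        g = np.zeros_like(x)
        for i in range(len(x)):
            step = np.zeros_like(x)
            step[i] = eps
            g[i] = (f(x + step) - f(x - step)) / (2.0 * eps)
        return g

    # Steepest descent: step against the gradient until it is ~0.
    x = np.array([4.0, 3.0])
    for _ in range(200):
        x = x - 0.1 * grad(x)

    print(x)        # approaches the minimum at (1, -2)
    print(grad(x))  # gradient is (nearly) zero at the optimum
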
What plays the role of the second derivative? It is the Hessian matrix: whether a stationary point is a minimum, a maximum, or a saddle point depends on whether the Hessian is positive definite, which in two variables can be checked from the sign of the determinant of the Hessian matrix (together with its first diagonal entry).

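A short sketch of that check on an assumed quadratic function, whose Hessian is constant.

    import numpy as np

    # Hessian of f(x, y) = x^2 + 3*y^2 (constant for a quadratic).
    H = np.array([[2.0, 0.0],
                  [0.0, 6.0]])

    # Second-derivative test in two variables, at a stationary point:
    #   det(H) > 0 and H[0, 0] > 0  -> minimum
    #   det(H) > 0 and H[0, 0] < 0  -> maximum
    #   det(H) < 0                  -> saddle point
    print(np.linalg.det(H))      # 12 > 0 and H[0, 0] = 2 > 0: a minimum
    print(np.linalg.eigvals(H))  # all eigenvalues positive: positive definite
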
For constrained problems, the necessary condition for identifying an optimal value becomes:

The gradient of the “Lagrangian” function must equal zero.

L(x,λ) = f(x) + λ*g(x)

λ is called the Lagrange multiplier

f(x) is the function we are optimizing

g(x) = 0 are the constraint equations

With two design variables, the Lagrangian is:

L(x1,x2,λ) = f(x1,x2) + λ*g(x1,x2)

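A minimal sketch with SymPy, using an assumed objective and an assumed equality constraint, that solves the condition "gradient of the Lagrangian = 0" directly.

    import sympy as sp

    x1, x2, lam = sp.symbols("x1 x2 lam")

    f = x1**2 + x2**2  # assumed objective to minimize
    g = x1 + x2 - 1    # assumed equality constraint, g = 0

    # Lagrangian: L(x1, x2, lam) = f + lam*g
    L = f + lam * g

    # Necessary condition: every partial derivative of L equals zero.
    eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
    print(sp.solve(eqs, (x1, x2, lam)))  # {x1: 1/2, x2: 1/2, lam: -1}
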