Optimization Scheme

Gradient derivation

The evaluation of the gradient of the objective function is trivial, since this function depends explicitly on the design variables; it is given by $\nabla_{\mathbf{c}_d} J = \mathbf{1}$. The evaluation of the gradients of the constraints, however, is not an easy matter, since the constraints depend on the design variables through differential equations. It is achieved indirectly by formulating the problem in state-space form and using optimal control theory.

The optimization problem (Eq. 5) is reformulated in terms of a single constraint on maximal values as:

$$\text{minimize:} \quad J = \mathbf{c}_d^T \cdot \mathbf{1}$$

$$\text{subject to:} \quad pi = \max\left(\max\left(\mathbf{E}_h(t_f)\right),\ \max\left(\mathbf{d}_m\right)\right) \le 1.0, \quad \text{the equations of motion, and} \quad \mathbf{0} \le \mathbf{c}_d \le \mathbf{c}_{d,\max} \qquad (6)$$

where $pi$ = performance index.

A differentiable equivalent of the constraint: Before proceeding formally with the gradient formulation, it is necessary, since use is made of a variational approach, to replace the max function over $t$ in Eq. 6 with a differentiable function.

Differentiable equivalent of $\mathbf{d}_m$. It is proposed to use a norm of the $p$-type as a differentiable equivalent to $\mathbf{d}_m = \max_t\left(\mathrm{abs}\left(\mathbf{D}^{-1}(\mathbf{d}_{all}) \cdot \mathbf{H}_x \cdot \mathbf{x}(t)\right)\right)$. Thus, $\mathbf{d}_{m,p}$ takes the form:

$$\mathbf{d}_{m,p} = \left(\int_0^{t_f} \mathbf{D}^{-p}(\mathbf{d}_{all}) \cdot \mathbf{D}^{p}\left(\mathbf{H}_x \cdot \mathbf{x}(t)\right) \cdot \mathbf{1}\, dt\right)^{1/p} \qquad (7)$$

where $p$ = a large positive even number and the power $1/p$ is applied component-wise. It follows that $\mathbf{d}_{m,p} \rightarrow \mathbf{d}_m$ as $p \rightarrow \infty$.
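As a side illustration (not part of the original derivation; the signal, grid and values of $p$ below are arbitrary), the following Python sketch shows how a time-integral $p$-norm of this kind approaches the true maximum over time as $p$ grows:

```python
import numpy as np

# Illustration (not from the paper): the time-integral p-norm of Eq. 7
# approaches the maximum over time as p grows.  Signal and p are arbitrary.
t = np.linspace(0.0, 20.0, 4001)
z = np.exp(-0.1 * t) * np.sin(2.0 * np.pi * 0.8 * t)   # a drift-like signal
dt = t[1] - t[0]

exact = np.abs(z).max()
for p in (10, 100, 1000):          # p = large positive even number
    # Discrete ( integral |z|^p dt )^(1/p): for large p the largest peak
    # dominates the integral, so the norm tends to max_t |z(t)|.
    approx = ((np.abs(z) ** p).sum() * dt) ** (1.0 / p)
    print(f"p = {p:5d}: p-norm = {approx:.4f}   exact max = {exact:.4f}")
```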

Differentiable equivalent of $pi$. The maximal component of a vector with non-negative entries, $\mathbf{z}$, can be evaluated using a differentiable weighted average of the form:

$$\max(\mathbf{z}) \cong \frac{\sum_i W_i^{q} \cdot z_i}{\sum_i W_i^{q}} \qquad (9)$$

where $W_i$ = weight of $z_i$, and $q$ = an index. When $q$ is large, say $q = p$, and the components of $\mathbf{z}$ are used as their own weights, i.e. $W_i = z_i$, this weighted average approaches the value of the maximum component of $\mathbf{z}$. Since $\mathbf{E}_h(t_f)$ and $\mathbf{d}_m$ are normalized quantities, $pi$ can be written as $pi = \max\left(\max\left(\mathbf{E}_h(t_f)\right),\ \max\left(\mathbf{d}_m\right)\right)$ and Eq. 9 can be reformulated as:

$$pi = \frac{\mathbf{1}^T \cdot \mathbf{D}^{q+1}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1} + \mathbf{1}^T \cdot \mathbf{D}^{q+1}\left(\mathbf{d}_m\right) \cdot \mathbf{1}}{\mathbf{1}^T \cdot \mathbf{D}^{q}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1} + \mathbf{1}^T \cdot \mathbf{D}^{q}\left(\mathbf{d}_m\right) \cdot \mathbf{1}} \qquad (10)$$
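A quick numerical check of this self-weighted average (again an illustration only; the vector and the values of $q$ are arbitrary):

```python
import numpy as np

# Illustration (values arbitrary): the self-weighted average of Eqs. 9-10
# tends to the largest component of a non-negative vector as q grows.
z = np.array([0.31, 0.74, 0.98, 0.55, 0.12])   # e.g. entries of E_h(t_f), d_m

for q in (2, 10, 100):
    pi = (z ** (q + 1)).sum() / (z ** q).sum()  # 1'·D^{q+1}(z)·1 / 1'·D^q(z)·1
    print(f"q = {q:3d}: pi = {pi:.4f}   max(z) = {z.max():.4f}")
```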

Substituting Eq. 7 into Eq. 10 yields a fully differentiable performance index, and the constraint $pi \le 1.0$ is rewritten as $g(\mathbf{y}(t_f)) \le 1.0$ (Eq. 12), where $\mathbf{y} = \{\mathbf{x}^T\ \mathbf{v}^T\ \mathbf{f}_h^T\ \mathbf{E}_h^T\ \mathbf{x}_{m,p}^T\}^T$ denotes the state vector. This constraint is augmented with the state equations (Eq. 23) through the vector of Lagrange multipliers $\boldsymbol{\lambda} = \{\boldsymbol{\lambda}_x^T\ \boldsymbol{\lambda}_v^T\ \boldsymbol{\lambda}_{fh}^T\ \boldsymbol{\lambda}_{Eh}^T\ \boldsymbol{\lambda}_{xmp}^T\}^T$. The variation of the augmented function, with $t_f$ specified, results in an expression in the variations of the state variables and of $\mathbf{c}_d$.

Taking the first three variations as arbitrary results in the following three differential equations and boundary conditions to be satisfied:

The multiplier of the variation $\delta\mathbf{c}_d$ yields the expression for the evaluation of the gradient $\nabla_{\mathbf{c}_d}\, g(\mathbf{y}(t_f))$. This expression becomes:

$$\delta g = \left(-\int_{t_0}^{t_f} \boldsymbol{\lambda}_v^T(t) \cdot \mathbf{M}^{-1} \cdot \frac{\partial \mathbf{C}_d(\mathbf{c}_d)}{\partial \mathbf{c}_d} \cdot \mathbf{v}(t)\, dt\right) \cdot \delta\mathbf{c}_d$$

which is the desired gradient, since

$$\nabla_{\mathbf{c}_d}\, g(\mathbf{y}(t_f)) = \frac{d}{d\mathbf{c}_d}\left[g(\mathbf{y}(t_f))\right].$$

The gradient of the constraint in Eq. 12 can now be evaluated from:

$$\nabla_{c_{d,i}}\, g(\mathbf{y}(t_f)) = -\int_{t_0}^{t_f} \boldsymbol{\lambda}_v^T(t) \cdot \mathbf{M}^{-1} \cdot \frac{\partial \mathbf{C}_d(\mathbf{c}_d)}{\partial c_{d,i}} \cdot \mathbf{v}(t)\, dt \qquad (20)$$

and the following set of differential equations and boundary conditions:

$$\dot{\boldsymbol{\lambda}}_x(t) = \left(\mathbf{M}^{-1} \cdot \mathbf{K}\right)^T \cdot \boldsymbol{\lambda}_v(t) - p\, \mathbf{H}_x^T \cdot \mathbf{D}^{p-1}\left(\mathbf{H}_x \cdot \mathbf{x}(t)\right) \cdot \mathbf{D}^{-p}(\mathbf{d}_{all}) \cdot \boldsymbol{\lambda}_{xmp}(t)$$

$$\dot{\boldsymbol{\lambda}}_v(t) = -\boldsymbol{\lambda}_x(t) + \left(\mathbf{M}^{-1} \cdot \left[\mathbf{C} + \mathbf{C}_d(\mathbf{c}_d)\right]\right)^T \cdot \boldsymbol{\lambda}_v(t) - \left(\frac{\partial \mathbf{f}\left(\mathbf{v}(t), \mathbf{f}_h(t)\right)}{\partial \mathbf{v}}\right)^T \cdot \boldsymbol{\lambda}_{fh}(t) - \mathbf{B}_{xf}^T \cdot \mathbf{D}\left(\mathbf{f}_h(t)\right) \cdot \mathbf{D}^{-1}(\mathbf{E}_h^{all}) \cdot \boldsymbol{\lambda}_{Eh}(t)$$

$$\dot{\boldsymbol{\lambda}}_{fh}(t) = \left(\mathbf{M}^{-1} \cdot \mathbf{B}_{fx}\right)^T \cdot \boldsymbol{\lambda}_v(t) - \left(\frac{\partial \mathbf{f}\left(\mathbf{v}(t), \mathbf{f}_h(t)\right)}{\partial \mathbf{f}_h}\right)^T \cdot \boldsymbol{\lambda}_{fh}(t) - \mathbf{D}\left(\mathbf{B}_{xf} \cdot \mathbf{v}(t)\right) \cdot \mathbf{D}^{-1}(\mathbf{E}_h^{all}) \cdot \boldsymbol{\lambda}_{Eh}(t)$$

$$\dot{\boldsymbol{\lambda}}_{Eh}(t) = \mathbf{0}; \qquad \dot{\boldsymbol{\lambda}}_{xmp}(t) = \mathbf{0} \qquad (21)$$

with the conditions at $t_f$:

$$\boldsymbol{\lambda}_x(t_f) = \mathbf{0}; \qquad \boldsymbol{\lambda}_v(t_f) = \mathbf{0}; \qquad \boldsymbol{\lambda}_{fh}(t_f) = \mathbf{0}$$

$$\boldsymbol{\lambda}_{Eh}(t_f) = \frac{1}{\text{den}^2}\left(\text{den} \cdot (q+1) \cdot \mathbf{D}^{q}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1} - \text{num} \cdot q \cdot \mathbf{D}^{q-1}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1}\right) \qquad (22)$$

$$\dot{\mathbf{x}}(t) = \mathbf{v}(t); \qquad \mathbf{x}(0) = \mathbf{0}$$

$$\dot{\mathbf{v}}(t) = \mathbf{M}^{-1}\left(-\left[\mathbf{C} + \mathbf{C}_d(\mathbf{c}_d)\right] \cdot \mathbf{v}(t) - \mathbf{K} \cdot \mathbf{x}(t) - \mathbf{B}_{fx} \cdot \mathbf{f}_h(t) - \mathbf{M} \cdot \mathbf{e} \cdot a_g(t)\right); \qquad \mathbf{v}(0) = \mathbf{0}$$

$$\dot{\mathbf{f}}_h(t) = \mathbf{f}\left(\mathbf{v}(t), \mathbf{f}_h(t)\right); \qquad \mathbf{f}_h(0) = \mathbf{0}$$

$$\dot{\mathbf{E}}_h(t) = \mathbf{D}^{-1}(\mathbf{E}_h^{all}) \cdot \mathbf{D}\left(\mathbf{f}_h(t)\right) \cdot \left(\mathbf{B}_{xf} \cdot \mathbf{v}(t)\right); \qquad \mathbf{E}_h(0) = \mathbf{0}$$

$$\dot{\mathbf{x}}_{m,p}(t) = \mathbf{D}^{-p}(\mathbf{d}_{all}) \cdot \mathbf{D}^{p}\left(\mathbf{H}_x \cdot \mathbf{x}(t)\right) \cdot \mathbf{1}; \qquad \mathbf{x}_{m,p}(0) = \mathbf{0} \qquad (23)$$

where

$$\text{num} = \mathbf{1}^T \cdot \mathbf{D}^{q+1}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1} + \mathbf{1}^T \cdot \mathbf{D}^{\frac{q+1}{p}}\left(\mathbf{x}_{m,p}(t_f)\right) \cdot \mathbf{1}$$

$$\text{den} = \mathbf{1}^T \cdot \mathbf{D}^{q}\left(\mathbf{E}_h(t_f)\right) \cdot \mathbf{1} + \mathbf{1}^T \cdot \mathbf{D}^{\frac{q}{p}}\left(\mathbf{x}_{m,p}(t_f)\right) \cdot \mathbf{1}.$$

Equation 23 comprises the equality constraints (the equations of motion together with the hysteretic-energy and drift accumulators); Eqs. 21 and 22 give expressions for the evaluation of the Lagrange multipliers $\boldsymbol{\lambda}_x$, $\boldsymbol{\lambda}_v$ and $\boldsymbol{\lambda}_{xmp}$, which are needed for the evaluation of Eq. 20. Now, since the elements of $\mathbf{C}_d(\mathbf{c}_d)$ are linear combinations of the elements of $\mathbf{c}_d$, the differentiation of $\mathbf{C}_d(\mathbf{c}_d)$ with respect to $c_{d,i}$ (also needed in Eq. 20) is rather simple and easily programmed, as the sketch below illustrates.
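For concreteness, consider linear viscous dampers placed between adjacent floors of a shear frame, so that $\mathbf{C}_d(\mathbf{c}_d) = \sum_i c_{d,i}\, \mathbf{R}_i$ with constant influence matrices $\mathbf{R}_i$, and hence $\partial \mathbf{C}_d / \partial c_{d,i} = \mathbf{R}_i$. The following sketch assumes this shear-frame layout; the function name and the numbers are ours, not the paper's:

```python
import numpy as np

def story_damper_influence(n_dof: int, story: int) -> np.ndarray:
    """Constant influence matrix R_i of a unit viscous damper in `story`
    (1-based) of a shear frame: the damper acts on the inter-story drift
    velocity, so R_i = r_i r_i' with r_i the corresponding drift row."""
    r = np.zeros(n_dof)
    r[story - 1] = 1.0
    if story > 1:
        r[story - 2] = -1.0
    return np.outer(r, r)

# Example: 3-story frame, dampers in all stories.
c_d = np.array([120.0, 80.0, 40.0])            # damping coefficients
R = [story_damper_influence(3, i + 1) for i in range(3)]
C_d = sum(c * Ri for c, Ri in zip(c_d, R))     # C_d(c_d) = sum_i c_d,i R_i
print(C_d)
print(R[1])                                    # = dC_d/dc_d,2, used in Eq. 20
```

The computation of the gradient for a single record is summarized as follows: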

Step 1: Solve the equations of motion (Eq. 23).

Step 2: Solve the equations of the Lagrange multipliers (Eq. 21, with the conditions at $t_f$ of Eq. 22).

Step 3: Calculate the desired gradient (Eq. 20).
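The three steps can be made concrete with a deliberately simplified, self-contained sketch: a linear two-story shear frame (hysteretic forces dropped, so only the linear part of Eq. 23 survives) and a single $p$-norm drift functional standing in for the performance index. Everything here (matrices, record, tolerances) is invented for the illustration and is not the paper's full formulation; the backward run mirrors the structure of Eq. 21, the gradient assembly mirrors Eq. 20, and a finite-difference check is appended:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Simplified setting: g = sum_i int_0^tf (drift_i/d_all)^p dt replaces pi.
M = np.eye(2)
k = 800.0
K = np.array([[2 * k, -k], [-k, k]])            # shear-frame stiffness
C = 0.6 * np.eye(2)                             # inherent damping
e = np.ones(2)                                  # influence vector
H = np.array([[1.0, 0.0], [-1.0, 1.0]])         # inter-story drifts = H @ x
R = [np.outer(H[i], H[i]) for i in range(2)]    # R_i = dC_d/dc_d,i
d_all, p, tf = 2e-3, 6, 10.0
Minv = np.linalg.inv(M)

def ag(t):                                      # toy ground acceleration
    return np.sin(6.0 * t) * np.exp(-0.3 * t)

def forward(c_d):
    """Step 1: integrate the equations of motion (linear part of Eq. 23)."""
    Cd = sum(c * Ri for c, Ri in zip(c_d, R))
    def f(t, y):
        x, v = y[:2], y[2:]
        return np.concatenate([v, Minv @ (-(C + Cd) @ v - K @ x
                                          - M @ e * ag(t))])
    return solve_ivp(f, (0.0, tf), np.zeros(4), dense_output=True,
                     rtol=1e-9, atol=1e-12), Cd

def g_of(c_d):
    sol, _ = forward(c_d)
    ts = np.linspace(0.0, tf, 4001)
    drift = H @ sol.sol(ts)[:2]
    return trapezoid(((drift / d_all) ** p).sum(axis=0), ts)

def gradient(c_d):
    sol, Cd = forward(c_d)
    def fadj(t, lam):
        # Step 2: adjoint equations -- the linear analogue of Eq. 21.
        lx, lv = lam[:2], lam[2:]
        x = sol.sol(t)[:2]
        dh = (p / d_all ** p) * (H.T @ (H @ x) ** (p - 1))
        return np.concatenate([(Minv @ K).T @ lv - dh,
                               -lx + (Minv @ (C + Cd)).T @ lv])
    adj = solve_ivp(fadj, (tf, 0.0), np.zeros(4), dense_output=True,
                    rtol=1e-9, atol=1e-12)      # integrated backwards from tf
    # Step 3: gradient as in Eq. 20: grad_i = -int lam_v' M^-1 R_i v dt.
    ts = np.linspace(0.0, tf, 4001)
    v, lv = sol.sol(ts)[2:], adj.sol(ts)[2:]
    return np.array([trapezoid(-np.einsum('it,ij,jt->t', lv, Minv @ Ri, v), ts)
                     for Ri in R])

c0 = np.array([3.0, 2.0])                       # current damper coefficients
eps = 1e-4                                      # finite-difference check
fd = np.array([(g_of(c0 + eps * d) - g_of(c0 - eps * d)) / (2 * eps)
               for d in np.eye(2)])
print("adjoint gradient :", gradient(c0))
print("finite difference:", fd)                 # the two should closely agree
```

Note that one forward run plus one backward run delivers the whole gradient, regardless of the number of dampers, which is the point of the adjoint formulation.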

Optimization scheme

The gradients of the objective function and of the constraints are needed at each iteration of a first-order optimization scheme. The solution therefore requires a time-history analysis for each record (constraint) at every iteration cycle. In order to reduce the computational effort, the optimization is first carried out for one "active" ground motion (loading condition), rather than for the whole ensemble. If the optimal solution for this ground motion violates the constraints of other records in the ensemble, additional ground motions are added one at a time (Step 4 below). The main steps of the methodology are as follows; a schematic sketch of the complete loop is given after Step 5.

Step 1: Select the "active" ground motion. The record producing the maximal displacement is selected to begin the process. This displacement is evaluated from an SDOF system having the first period of the undamped structure and a damping ratio within the expected range of the total damping.

Step 2: Compute an initial starting value for the damping vector. The starting point is evaluated by first assuming an equal distribution of dampers. This damping vector is then factored so as to satisfy pi = 1.0, where pi is computed from a time-history analysis of the frame excited by the "active" ground motion of Step 1 (a sketch of this factoring is given below).
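A minimal sketch of the factoring in Step 2, assuming a hypothetical callable `pi_of(c_d)` that runs the time-history analysis and returns the performance index, and assuming pi decreases monotonically as damping grows:

```python
import numpy as np

def initial_damping(pi_of, n_stories, c_unit=1.0, tol=1e-3):
    """Step 2 sketch: start from equal dampers in all stories and factor the
    vector until pi ~= 1.0 for the active record.  pi_of(c_d) stands in for a
    full time-history analysis; bisection on the factor assumes pi decreases
    monotonically with added damping."""
    c = c_unit * np.ones(n_stories)
    lo, hi = 1e-6, 1.0
    while pi_of(hi * c) > 1.0:        # grow the bracket until pi(hi*c) <= 1
        hi *= 2.0
    while hi / lo > 1.0 + tol:        # bisect the factor down to tolerance
        mid = 0.5 * (lo + hi)
        if pi_of(mid * c) > 1.0:
            lo = mid
        else:
            hi = mid
    return hi * c                     # smallest bracketed factor with pi <= 1

# Toy stand-in for the analysis: pi falls as total damping grows.
demo_pi = lambda c_d: 2.0 / (1.0 + c_d.sum() / 10.0)
print(initial_damping(demo_pi, n_stories=3))   # ~3.33 in every story
```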

Step 3: Solve the optimization problem for the active set of records. An appropriate gradient-based optimization scheme is used, with the gradients evaluated as described above. If more than one record is "active", say two, the gradients are calculated separately for each record and the size of the problem doubles.

Step 4: Feasibility check. A time-history analysis is performed on the optimally damped structure for each of the remaining records in the ensemble, separately. The ground motion with the largest pi is added to the active set if that pi is greater than 1.0, and Step 3 is then repeated.

Step 5: Stop. The process terminates once no remaining record yields pi greater than 1.0.
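The loop of Steps 1-5 can be summarized schematically as follows. All the callables are hypothetical stand-ins, not named in the paper: `pi_of` would wrap a full time-history analysis, `solve_nlp` the gradient-based solution of Eq. 6 on the active set, and `initial_damping` the factoring of Step 2:

```python
def optimize_ensemble(records, pi_of, solve_nlp, initial_damping):
    """Schematic outer loop (Steps 1-5); hypothetical components:
      records            -- ensemble, ordered so records[0] gives the
                            maximal SDOF displacement (Step 1),
      pi_of(c_d, rec)    -- time-history analysis returning pi for a record,
      solve_nlp(c0, act) -- gradient-based solution of Eq. 6 on `act`,
      initial_damping(r) -- Step 2 starting point for the first record."""
    active = [records[0]]                    # Step 1: first "active" motion
    c_d = initial_damping(active[0])         # Step 2: starting point
    while True:
        c_d = solve_nlp(c_d, active)         # Step 3: optimize on active set
        rest = [r for r in records if r not in active]
        pis = {r: pi_of(c_d, r) for r in rest}   # Step 4: feasibility check
        if not rest or max(pis.values()) <= 1.0:
            return c_d                       # Step 5: all records satisfied
        active.append(max(pis, key=pis.get)) # add the worst violator only
```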