Galerkin Method

The Galerkin solution of Equation (1) requires the discretization of the stochastic dimension, that is, of the random variables $(X, Y, Z)$. Polynomial chaos representations and partitions of the sample space $\Omega_1 \times \Omega_2$ or of the range of the random variables $(Y, Z)$ can be used to discretize the stochastic dimension of Equation (1). The Galerkin method based on polynomial chaos has been applied successfully to solve a broad range of stochastic problems (Ghanem and Spanos, 1991), although some theoretical aspects of the method remain to be clarified. For example, the m.s. convergence of the polynomial chaos representations of $(Y, Z)$ to $(Y, Z)$ does not guarantee the m.s. convergence of the corresponding representation of $X$ to the exact solution $X$. Also, moments of order 3 and higher of polynomial chaos representations may not converge to the corresponding target moments (Field and Grigoriu, 2004). The Galerkin method using partitions of the range of the random variables $(Y, Z)$ views the solution $X$ as an unknown function of $(Y, Z)$ that can be approximated by polynomials or other functions depending on some unknown coefficients. The solution of Equation (1) is found by solving a deterministic version of this equation obtained by viewing $Z$ as a parameter taking values in $Z(\Omega_1)$. The measure on $Z(\Omega_1)$ is given by the density of $Z$ rather than the Lebesgue measure (Babuska et al., 2004).
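
To make the parametric viewpoint concrete, the following sketch treats a hypothetical scalar instance of Equation (1) of the algebraic form $(a + r(Z))\,X = Y$ used later in this section; the constant $a$, the function $r$, the lognormal density for $Z$, and all numerical values are illustrative assumptions, not taken from the paper. The deterministic version is solved on a grid of values $z \in Z(\Omega_1)$, and moments of $X$ are obtained by integrating against the density of $Z$ rather than the Lebesgue measure.

```python
import numpy as np
from scipy import stats

# Hypothetical scalar instance of Equation (1): (a + r(Z)) X = Y, so that
# X = Y / (a + r(Z)).  The names a, r, and the lognormal law of Z are
# illustrative assumptions only.
a = 2.0
r = lambda z: z**2          # random perturbation of the operator
y = 1.0                     # deterministic right-hand side for this sketch
z_density = stats.lognorm(s=0.5).pdf

# Deterministic version of Equation (1): view Z as a parameter z in Z(Omega_1)
# and solve for each z on a grid covering (most of) the range of Z.
z_grid = np.linspace(1e-3, 10.0, 2001)
x_of_z = y / (a + r(z_grid))

# Moments of X are integrals of x(z) against the density of Z, here
# approximated by the trapezoidal rule.
mean_X = np.trapz(x_of_z * z_density(z_grid), z_grid)
var_X = np.trapz((x_of_z - mean_X)**2 * z_density(z_grid), z_grid)
print(mean_X, var_X)
```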

The version of the Galerkin method considered here is based on partitions of the product sample space $\Omega_1 \times \Omega_2$. Let

$$\{\emptyset, \Omega_k\} = G_{k,1} \subset \cdots \subset G_{k,i} \subset \cdots \subset G_{k,n_k} = F_k, \qquad k = 1, 2, \tag{3}$$

be two sequences of sub-$\sigma$-fields on the probability spaces $(\Omega_k, F_k, P_k)$, $k = 1, 2$, that can be constructed from, for example, finite partitions of the sample spaces $\Omega_k$, $k = 1, 2$. Let $\{A_q\}$, $q = 1, \ldots, m$, and $\{\Gamma_r\}$, $r = 1, \ldots, m'$, be measurable partitions of the sample spaces $\Omega_1$ and $\Omega_2$, respectively. The $\sigma$-fields generated by the sets $\{A_q\}$, $\{\Gamma_r\}$, and $\{A_q \times \Gamma_r\}$ are of the type $G_{1,i}$, $G_{2,j}$, and $G_{1,i} \otimes G_{2,j}$ in Equation (3), respectively. In the remainder of this section we define optimal and sub-optimal Galerkin solutions and give some of their properties.

Property 1. The optimal Galerkin solution corresponding to the information content of a sub-$\sigma$-field $G_{1,i} \otimes G_{2,j}$ is

$$\tilde{X}_{i,j} = E[X \mid G_{1,i} \otimes G_{2,j}] = E_1[\beta(Z) \mid G_{1,i}]\, E_2[Y \mid G_{2,j}] \quad \text{(a.s.)}, \tag{4}$$

where $E_k$ and $E$ denote expectations with respect to the probability measures $P_k$, $k = 1, 2$, and $P_1 \otimes P_2$, respectively.

Generally, $X = (a + r(Z))^{-1}\, Y = \beta(Z)\, Y$ is not $G_{1,i} \otimes G_{2,j}$-measurable for $i < n_1$ and/or $j < n_2$, that is, it is not a random variable with respect to the $\sigma$-field $G_{1,i} \otimes G_{2,j}$. The first equality in Equation (4) follows from the fact that the conditional expectation is the best mean square estimator of $X$ with respect to the information content of $G_{1,i} \otimes G_{2,j}$ (Grigoriu, 2002: Section 2.17.2). The validity of the second equality in Equation (4) results from properties of the conditional expectation, properties of $\sigma$-fields on product spaces, Fubini's theorem, and a theorem by Dynkin (Resnick, 1998: Section 2.2).
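
As an illustration of the first equality in Equation (4), the sketch below approximates $E_1[\beta(Z) \mid G_{1,i}]$ for a $\sigma$-field generated by a finite partition $\{A_q\}$ of $\Omega_1$: the conditional expectation is piecewise constant, equal to the cell average of $\beta(Z)$, and has a smaller mean square error than any other $G_{1,i}$-measurable choice. The scalar model, the lognormal law of $Z$, and the quantile-based partition are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): a scalar model with
# beta(Z) = 1 / (a + Z**2) and Z lognormal on (Omega_1, F_1, P_1).
a = 2.0
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)   # samples of Z

# Partition {A_q} of Omega_1 induced by m quantile bins of Z; the sigma-field
# G_1 = sigma({A_q}) is generated by these cells.
m = 8
edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)

# E_1[beta(Z) | G_1] is constant on each cell A_q and equals the cell average.
cond_exp = np.array([beta(z[cell == q]).mean() for q in range(m)])
approx = cond_exp[cell]                                # G_1-measurable estimator

# Best m.s. property: any other G_1-measurable (piecewise constant) choice,
# e.g. freezing beta at the cell midpoints, has a larger mean square error.
mse_opt = np.mean((beta(z) - approx) ** 2)
midpoints = 0.5 * (edges[:-1] + edges[1:])
mse_sub = np.mean((beta(z) - beta(midpoints)[cell]) ** 2)
print(mse_opt, mse_sub)   # mse_opt <= mse_sub
```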

Consider the special case in which $G_{2,n_2} = F_2$. The corresponding optimal Galerkin solution is

$$\tilde{X}_{i,n_2} = E_1[\beta(Z) \mid G_{1,i}]\, Y \tag{5}$$

since $E_2[Y \mid G_{2,n_2}] = Y$ a.s. (Equation (4)). This solution is used extensively in applications (Deb et al., 2001). Once the conditional expectation $E_1[\beta(Z) \mid G_{1,i}]$ has been calculated, Equation (5) can be used to calculate approximately the statistics of the exact solution $X$.
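
A minimal sketch of this use of Equation (5), under the same illustrative assumptions as above: once the piecewise-constant values of $E_1[\beta(Z) \mid G_{1,i}]$ are available, statistics of $\tilde{X}_{i,n_2}$ are obtained by pairing them with independent samples of $Y$. The exponential law assumed for $Y$ is again for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions as before: beta(z) = 1/(a + z**2), Z lognormal,
# Y exponential, Z and Y independent (they live on different sample spaces).
a, m = 2.0, 8
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(0.0, 0.5, size=200_000)
y = rng.exponential(scale=1.0, size=200_000)

edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)
u = np.array([beta(z[cell == q]).mean() for q in range(m)])  # E1[beta(Z)|A_q]

# Equation (5): X_tilde = E1[beta(Z) | G_1] * Y.  Its samples pair an
# independent draw of Y with the cell of an independent draw of Z.
x_tilde = u[cell] * y
print(x_tilde.mean(), x_tilde.var())       # approximate statistics of X
print((beta(z) * y).mean())                # compare with exact X = beta(Z) Y
```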

Property 2. The optimal Galerkin solution in Equation (4) ranges from the expectation of the exact solution to the exact solution, depending on the information content of the sub-$\sigma$-fields $G_{1,i}$ and $G_{2,j}$.

Consider sub-$\sigma$-fields of $F_k$ containing limited or full information on the random variables $Y$ and $Z$. The corresponding optimal Galerkin solutions are (Equation (4))

$$\begin{aligned}
\tilde{X}_{1,1} &= E_1[\beta(Z)]\, E_2[Y] = E[X] \\
\tilde{X}_{1,n_2} &= E_1[\beta(Z)]\, Y \\
\tilde{X}_{n_1,1} &= \beta(Z)\, E_2[Y] \\
\tilde{X}_{n_1,n_2} &= \beta(Z)\, Y = X, \tag{6}
\end{aligned}$$

where the above equalities hold almost surely with respect to the product probability measure $P_1 \otimes P_2$. These results follow from properties of the conditional expectation (Grigoriu, 2002: Section 2.7.2). For example, $E_1[\beta(Z) \mid G_{1,i}]$ is equal to $E_1[\beta(Z)]$ and to $\beta(Z)$ a.s. for $i = 1$ and $i = n_1$, respectively. Also, $\tilde{X}_{n_1,n_2}$ is equal to the exact solution $X$ a.s. since the sub-$\sigma$-fields $G_{k,n_k}$ coincide with the $\sigma$-fields $F_k$, so that $E_1[\beta(Z) \mid G_{1,n_1}] = \beta(Z)$ and $E_2[Y \mid G_{2,n_2}] = Y$ a.s.
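
The following sketch illustrates Property 2 numerically under the same assumed scalar model: refining the partition of $\Omega_1$ moves $\tilde{X}$ from $E[X]$ (trivial $\sigma$-field, $m = 1$) toward the exact solution $X$, as measured by the mean square error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same illustrative scalar model as in the previous sketches (assumed, not
# from the paper): X = beta(Z) * Y with Z, Y independent.
a = 2.0
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(0.0, 0.5, size=400_000)
y = rng.exponential(1.0, size=400_000)
x_exact = beta(z) * y

# Refining the partition of Omega_1 moves X_tilde from E[X] (m = 1, trivial
# sigma-field) toward the exact solution X (Property 2).
for m in (1, 2, 8, 32, 128):
    edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
    cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)
    u = np.array([beta(z[cell == q]).mean() for q in range(m)])
    x_tilde = u[cell] * y                         # Equation (5) with G_2 = F_2
    print(m, np.mean((x_exact - x_tilde) ** 2))   # m.s. error shrinks with m
```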

Property 3. The second-moment properties of the optimal Galerkin solution are

$$\begin{aligned}
\mu_{i,j} &= E[\tilde{X}_{i,j}] = E[X] \\
\gamma_{i,j} &= E\big[(\tilde{X}_{i,j} - E[\tilde{X}_{i,j}])(\tilde{X}_{i,j} - E[\tilde{X}_{i,j}])^T\big] \\
&= E_1\big[E_1[\beta(Z) \mid G_{1,i}]\, E_2[\tilde{Y}\tilde{Y}^T]\, E_1[\beta(Z) \mid G_{1,i}]^T\big] + E_1\big[\widetilde{\beta(Z)}\, E_2[Y]\, E_2[Y]^T\, \widetilde{\beta(Z)}^T\big] \tag{7}
\end{aligned}$$

with the notation

$$\begin{aligned}
\tilde{Y} &= E_2[Y \mid G_{2,j}] - E_2[Y] \\
\widetilde{\beta(Z)} &= E_1[\beta(Z) \mid G_{1,i}] - E_1[\beta(Z)]. \tag{8}
\end{aligned}$$

The expectation of $\tilde{X}_{i,j}$ is $E_1\{E_1[\beta(Z) \mid G_{1,i}]\}\, E_2\{E_2[Y \mid G_{2,j}]\}$, which is equal to $E_1[\beta(Z)]\, E_2[Y] = E[\beta(Z)\, Y] = E[X]$ by properties of the conditional expectation and the independence of $Z$ and $Y$. Hence, the optimal Galerkin solution $\tilde{X}_{i,j}$ in Equation (4) is an unbiased approximation of the exact solution $X = \beta(Z)\, Y$.

We have $\tilde{X}_{i,j} - E[\tilde{X}_{i,j}] = E_1[\beta(Z) \mid G_{1,i}]\, \tilde{Y} + \widetilde{\beta(Z)}\, E_2[Y]$ with the notation in Equation (8). The definition of the covariance matrix $\gamma_{i,j}$, the independence of $Z$ and $Y$, and properties of the conditional expectation give the second relation in Equation (7). If $G_{2,j}$ coincides with $F_2$, then $\tilde{Y}$ in Equation (8) becomes $Y - E_2[Y]$.
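
The scalar version of Property 3 can be checked by simulation. In the sketch below (same assumed model, $G_2 = F_2$), the sample mean of $\tilde{X}$ matches that of $X$, and the two terms of Equation (7) add up to the variance of $\tilde{X}$ because the cross terms vanish ($E_2[\tilde{Y}] = 0$ and $E_1[\widetilde{\beta(Z)}] = 0$).

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar check of Property 3 under the same illustrative model (assumed):
# E[X_tilde] = E[X] and var(X_tilde) from the two terms of Equation (7).
a, m = 2.0, 8
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(0.0, 0.5, size=400_000)
y = rng.exponential(1.0, size=400_000)

edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)
b_cond = np.array([beta(z[cell == q]).mean() for q in range(m)])[cell]

# Take G_2 = F_2, so Y_tilde = Y - E2[Y] (Equation (8) in this special case).
y_tilde = y - y.mean()
b_tilde = b_cond - beta(z).mean()          # beta(Z)_tilde of Equation (8)

x_tilde = b_cond * y
print(x_tilde.mean(), (beta(z) * y).mean())   # unbiasedness: both approx E[X]

# Equation (7), scalar form: the two terms add up to var(X_tilde) because
# the cross terms vanish (E2[Y_tilde] = 0, E1[beta_tilde] = 0).
term1 = np.mean(b_cond**2) * np.mean(y_tilde**2)
term2 = np.mean(b_tilde**2) * y.mean()**2
print(term1 + term2, x_tilde.var())
```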

In the remainder of the paper we denote $\sigma$-fields of the type $G_{1,i}$, $G_{2,j}$, and $G_{1,i} \otimes G_{2,j}$ by $G_1$, $G_2$, and $G = G_1 \otimes G_2$ for simplicity. Generally, we choose the $\sigma$-fields $G_1$, $G_2$, and $G$ to be coarser than $F_1$, $F_2$, and $F_1 \otimes F_2$, that is, $G_1 \subset F_1$, $G_2 \subset F_2$, and $G \subset F_1 \otimes F_2$. Accordingly, the optimal Galerkin solution with respect to $G$ is (Equation (4))

$$\tilde{X} = E[X \mid G] = E_1[\beta(Z) \mid G_1]\, E_2[Y \mid G_2]. \tag{9}$$

We also consider Galerkin solutions that differ from $E[X \mid G]$. These solutions are referred to as sub-optimal Galerkin solutions and are also denoted by $\tilde{X}$. Generally, sub-optimal Galerkin solutions are biased approximations of the exact solution. The error of a sub-optimal Galerkin solution $\tilde{X}$ is

$$X - \tilde{X} = \big(X - E[X \mid G]\big) + \big(E[X \mid G] - \tilde{X}\big). \tag{10}$$

The first term in Equation (10) is the error of the optimal Galerkin solution and cannot be reduced for a given $G$. The second term in Equation (10) is the difference between the optimal and a sub-optimal Galerkin solution. This component of the error can be reduced by improving the sub-optimal solution.
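
A sketch of the error decomposition in Equation (10), under the same illustrative assumptions: the optimal solution uses cell averages of $\beta(Z)$, while a sub-optimal solution freezes $\beta$ at the cell midpoints $z_q$; the first error component is fixed by $G$, and the second measures the distance between the two solutions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Error split of Equation (10) for the same illustrative model (assumed):
# X - X_tilde = (X - E[X|G]) + (E[X|G] - X_tilde), with G_2 = F_2.
a, m = 2.0, 8
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(0.0, 0.5, size=400_000)
y = rng.exponential(1.0, size=400_000)
x_exact = beta(z) * y

edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)

x_opt = np.array([beta(z[cell == q]).mean() for q in range(m)])[cell] * y
midpoints = 0.5 * (edges[:-1] + edges[1:])
x_sub = beta(midpoints)[cell] * y           # sub-optimal: beta frozen at z_q

# First error component is fixed by G; the second vanishes for the optimal
# solution and measures how far the sub-optimal solution is from E[X | G].
print(np.mean((x_exact - x_opt) ** 2))      # irreducible for this G
print(np.mean((x_opt - x_sub) ** 2))        # reducible component
```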

Property 4. If $\{A_q\}$, $q = 1, \ldots, m$, is a partition of $\Omega_1$, $G_1 = \sigma(\{A_q\})$, and $G_2 = F_2$, then the mean, the correlation, and the distribution of the corresponding optimal and sub-optimal Galerkin solutions $\tilde{X}$ have the expressions:

$$\begin{aligned}
E[\tilde{X}] &= \sum_{q=1}^{m} E_2[W_q]\, P_1(A_q) \\
E[\tilde{X}\tilde{X}^T] &= \sum_{q=1}^{m} E_2[W_q W_q^T]\, P_1(A_q) \\
P(\tilde{X}_{i_1} \le \xi_1, \ldots, \tilde{X}_{i_s} \le \xi_s) &= \sum_{q=1}^{m} P_2(W_{q,i_1} \le \xi_1, \ldots, W_{q,i_s} \le \xi_s)\, P_1(A_q), \tag{11}
\end{aligned}$$

where $W_q = u_q\, Y$ is a vector in $\mathbb{R}^d$ with coordinates $W_{q,j}$, $j = 1, \ldots, d$, and $u_q$ is a constant matrix.

We have

$$u_q = E_1[\beta(Z) \mid A_q] = \frac{E_1[\beta(Z)\, 1_{A_q}]}{P_1(A_q)} \qquad \text{or} \qquad u_q = \beta(z_q), \quad z_q \in Z(A_q),$$

for the optimal and a sub-optimal Galerkin solution, respectively, by properties of the conditional expectation and Equation (5). The above sub-optimal Galerkin solution corresponds to an approximate representation of $Z$ that sets this variable constant and equal to $z_q$ in each $A_q$, that is, $Z$ is approximated by the simple random variable $\bar{Z} = \sum_{q=1}^{m} z_q\, 1_{A_q}$, where $1_{A_q}$ denotes the indicator function of $A_q$, defined by $1_{A_q}(\omega_1) = 1$ for $\omega_1 \in A_q$ and $1_{A_q}(\omega_1) = 0$ for $\omega_1 \notin A_q$.

The results in Equation (11) follow from Equation (4) by the linearity of the expectation operator and the law of total probability. If $Y = z$ is a deterministic vector, then $E_2[W_q] = w_q = u_q\, z$, $E_2[W_q W_q^T] = w_q w_q^T$, and $P_2(W_{q,i_1} \le \xi_1, \ldots, W_{q,i_s} \le \xi_s) = 1(w_{q,i_1} \le \xi_1, \ldots, w_{q,i_s} \le \xi_s)$.

If $Y$ is a random vector, then $E_2[W_q] = u_q\, E_2[Y]$ and $E_2[W_q W_q^T] = u_q\, E_2[Y Y^T]\, u_q^T$, and the probabilities $P_2(W_{q,i_1} \le \xi_1, \ldots, W_{q,i_s} \le \xi_s)$ can be estimated by Monte Carlo simulation.
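
A sketch of Equation (11) for the assumed scalar model ($d = 1$): the mean, correlation, and distribution of $\tilde{X}$ are computed as $P_1(A_q)$-weighted mixtures over the cells, with the probabilities $P_2(W_q \le \xi)$ estimated by Monte Carlo simulation as stated above. All distributions and numerical values remain illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of Equation (11) for the illustrative scalar model (d = 1, assumed):
# statistics of X_tilde as P_1(A_q)-weighted mixtures over the cells A_q.
a, m, n = 2.0, 8, 200_000
beta = lambda z: 1.0 / (a + z**2)
z = rng.lognormal(0.0, 0.5, size=n)
y = rng.exponential(1.0, size=n)             # samples of Y for the P_2 estimates

edges = np.quantile(z, np.linspace(0.0, 1.0, m + 1))
cell = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, m - 1)
p1 = np.array([(cell == q).mean() for q in range(m)])        # P_1(A_q)
u = np.array([beta(z[cell == q]).mean() for q in range(m)])  # optimal u_q

xi = 0.3                                     # evaluation point of the c.d.f.
mean = sum(np.mean(u[q] * y) * p1[q] for q in range(m))
corr = sum(np.mean((u[q] * y) ** 2) * p1[q] for q in range(m))
# P_2(W_q <= xi) estimated by Monte Carlo simulation, as in the text.
cdf = sum(np.mean(u[q] * y <= xi) * p1[q] for q in range(m))
print(mean, corr, cdf)
```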