# Approximation for Joint PDFs

Here, we first illustrate the development of the joint pdf of the form $p_{Z_1 \dot Z_1}(\cdot)$. The method is based on the formulation developed by Naess (1985). Next, based on an earlier study (Gupta and van Gelder, 2005), we show the development of an approximation for $p_{Z_1 Z_2 \dot Z_1 \dot Z_2}(\cdot)$.

## Scalar Case

We rewrite $p_{Z_1 \dot Z_1}(\cdot)$ in the form

$$p_{Z_1 \dot Z_1}(z_1, \dot z_1) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p_{X_2 \ldots X_n Z_1 \dot Z_1}(x_2, \ldots, x_n, z_1, \dot z_1)\,\mathrm{d}x_2 \cdots \mathrm{d}x_n,$$

where $p_{X_2 \ldots X_n Z_1 \dot Z_1}(\cdot)$ is the joint pdf of the random variables $X_2, \ldots, X_n$, $Z_1$ and $\dot Z_1$ at time $t$. Using the standard technique of transformation of random variables, we seek the transformation between the joint pdfs $p_{X_2 \ldots X_n Z_1 \dot Z_1}(\cdot)$ and $p_{X_1 \ldots X_n \dot Z_1}(\cdot)$. To achieve this, we assume that at time $t$, $Z_1$ in Equation (7) is a function of $X_1$, with all other random variables held fixed. We assume that the equation $z_1 = g[x_1, x_2, \ldots, x_n]$ admits $k$ solutions for $x_1$, for a given set of values $Z_1 = z_1$, $X_2 = x_2, \ldots, X_n = x_n$. This leads to the expression

$$p_{X_2 \ldots X_n Z_1 \dot Z_1}(x_2, \ldots, x_n, z_1, \dot z_1) = \sum_{j=1}^{k} \left|\frac{\partial g}{\partial x_1}\right|^{-1}_{x_1 = x_1^{(j)}} p_{X_1 \ldots X_n \dot Z_1}\big(x_1^{(j)}, x_2, \ldots, x_n, \dot z_1\big).$$

Here, $k$ depends on the form of the function $g[\cdot]$. The joint pdf $p_{X_1 \ldots X_n \dot Z_1}(\cdot)$ can now be written as

$$p_{X_1 \ldots X_n \dot Z_1}(x_1, \ldots, x_n, \dot z_1) = p_{\dot Z_1 \mid X_1 \ldots X_n}(\dot z_1 \mid X_1 = x_1, \ldots, X_n = x_n)\, p_{X_1 \ldots X_n}(x_1, \ldots, x_n). \quad (16)$$
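The sum over the $k$ roots can be checked on a one-dimensional toy case: for $Z = g[X_1] = X_1^2$ with $X_1 \sim N(0,1)$, the equation $z = x_1^2$ has $k = 2$ solutions $x_1^{(1,2)} = \pm\sqrt{z}$, and the root sum must reproduce the chi-square pdf with one degree of freedom. A minimal sketch (the function names are ours, not the paper's):

```python
import numpy as np
from scipy import stats

def pdf_via_root_sum(z):
    """pdf of Z = X1**2, X1 ~ N(0,1), via the k-root sum:
    p_Z(z) = sum_j |dg/dx1|^{-1} p_X1(x1_j), with x1_j = +/- sqrt(z)."""
    roots = [np.sqrt(z), -np.sqrt(z)]        # k = 2 solutions of z = x1**2
    dg_dx1 = lambda x1: abs(2.0 * x1)        # |dg/dx1| evaluated at each root
    return sum(stats.norm.pdf(r) / dg_dx1(r) for r in roots)

# The exact answer is the chi-square pdf with 1 degree of freedom.
z = 1.7
print(pdf_via_root_sum(z), stats.chi2.pdf(z, df=1))
```

The two printed values agree, confirming that the change-of-variables sum recovers the exact density.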

Here, $p_{X_1 \ldots X_n}(x_1, \ldots, x_n)$ is the $n$-dimensional joint Gaussian pdf and is completely specified once the mean and the covariance matrix of the vector Gaussian process are known. To determine the conditional pdf $p_{\dot Z_1 \mid X_1 \ldots X_n}(\cdot)$, we first write the time derivative of $Z_1(t)$ from Equation (7), which, when conditioned on $\{X_j = x_j\}_{j=1}^{n}$, is given by

$$\dot Z_1(t) = \sum_{j=1}^{n} g_j \dot X_j(t) = \mathbf{G} \dot{\mathbf{X}}(t).$$

Here, $\mathbf{G} = [g_1, \ldots, g_n]$, $\dot{\mathbf{X}} = [\dot X_1, \ldots, \dot X_n]'$, the superscript $(')$ denotes transpose, and $g_j = \partial Z_1/\partial X_j$, which, when conditioned on $\mathbf{X}$, is a constant. The $\dot X_j(t)$ are the time derivatives of $X_j(t)$ and are thus zero-mean, stationary, Gaussian random processes. Since $\dot Z_1 \mid \mathbf{X}$ is a linear sum of Gaussian random variables, $\dot Z_1 \mid \mathbf{X}$ is Gaussian, with parameters

$$\mu_{\dot Z_1 \mid \mathbf{x}} = \mathbf{G} \langle \dot{\mathbf{X}} \rangle = 0,$$

$$\sigma^2_{\dot Z_1 \mid \mathbf{x}} = \mathbf{G} \langle \dot{\mathbf{X}} \dot{\mathbf{X}}^{*} \rangle \mathbf{G}' = \mathbf{G} \mathbf{C}_{\dot{\mathbf{X}}\dot{\mathbf{X}}} \mathbf{G}'. \quad (18)$$
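Equation (18) is straightforward to verify numerically: with $\mathbf{G}$ fixed by the conditioning, $\dot Z_1 \mid \mathbf{x} = \mathbf{G}\dot{\mathbf{X}}$ and its variance is the quadratic form $\mathbf{G}\mathbf{C}_{\dot{\mathbf{X}}\dot{\mathbf{X}}}\mathbf{G}'$. A sketch with illustrative numbers (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

G = np.array([0.5, -1.2, 2.0])        # gradient dZ1/dXj, constant under conditioning
C = np.diag([1.0, 0.25, 4.0])         # covariance of the zero-mean derivatives Xdot

var_closed = G @ C @ G.T              # Equation (18): G C G'

# Monte Carlo check: sample Xdot and form Z1dot = G Xdot
xdot = rng.multivariate_normal(np.zeros(3), C, size=200_000)
z1dot = xdot @ G
var_mc = z1dot.var()

print(var_closed, var_mc)             # the two estimates should agree closely
```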

Here, $*$ denotes complex conjugation, and $\Omega_j$ denotes the domain of integration determined by the permissible set of values of $x_2, \ldots, x_n$ for each solution $x_1^{(j)}$. Since $p_{\dot Z_1 \mid \mathbf{X}}(\cdot)$ is Gaussian, it can be shown that (Naess, 1985)

$$\int_0^{\infty} \dot z_1\, p_{\dot Z_1 \mid \mathbf{X}}(\dot z_1 \mid \mathbf{x})\,\mathrm{d}\dot z_1 = \sigma_{\dot Z_1 \mid \mathbf{x}}\, \Psi\!\left(\frac{\mu_{\dot Z_1 \mid \mathbf{x}}}{\sigma_{\dot Z_1 \mid \mathbf{x}}}\right),$$

where $\Psi(x) = \phi(x) + x\Phi(x)$, and $\phi(x)$ and $\Phi(x)$ are, respectively, the standard normal pdf and PDF. Substituting Equations (14)-(16) into Equation (11), we get

$$\nu^{+}(z_1) = \sum_{j=1}^{k} \int_{\Omega_j} \left|\frac{\partial g}{\partial x_1}\right|^{-1}_{x_1 = x_1^{(j)}} \sigma_{\dot Z_1 \mid \mathbf{x}}\, \Psi\!\left(\frac{\mu_{\dot Z_1 \mid \mathbf{x}}}{\sigma_{\dot Z_1 \mid \mathbf{x}}}\right) p_{\mathbf{X}}\big(x_1^{(j)}, x_2, \ldots, x_n\big)\,\mathrm{d}x_2 \cdots \mathrm{d}x_n. \quad (19)$$

Without loss of generality, $\mathbf{X}$ can be assumed to be a vector of mutually independent Gaussian random variables. When the components of $\mathbf{X}$ are correlated, appropriate linear transformations can be applied to render them mutually independent. These linear transformations, however, result in a new definition for the function $g[\cdot]$.
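The function $\Psi(x) = \phi(x) + x\Phi(x)$ is, up to the scale $\sigma$, the half-range first moment of a Gaussian density, i.e. $\int_0^\infty v\,N(v;\mu,\sigma^2)\,\mathrm{d}v = \sigma\Psi(\mu/\sigma)$, a standard identity that can be confirmed numerically (the values of $\mu$ and $\sigma$ below are illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def Psi(x):
    """Psi(x) = phi(x) + x * Phi(x), with phi/Phi the standard normal pdf/cdf."""
    return stats.norm.pdf(x) + x * stats.norm.cdf(x)

mu, sigma = 0.3, 1.2                  # illustrative conditional mean and std
closed = sigma * Psi(mu / sigma)      # sigma * Psi(mu / sigma)

# direct numerical evaluation of the half-range first moment
integral, _ = quad(lambda v: v * stats.norm.pdf(v, loc=mu, scale=sigma), 0, np.inf)

print(closed, integral)
```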

Equation (19) can now be expressed as

$$\nu^{+}(z_1) = \sum_{j=1}^{k} \nu_j^{+}(z_1), \quad (20)$$

where

$$\nu_j^{+}(z_1) = \int_{\Omega_j} \sigma_{\dot Z_1 \mid \mathbf{x}}\, \Psi\!\left(\frac{\mu_{\dot Z_1 \mid \mathbf{x}}}{\sigma_{\dot Z_1 \mid \mathbf{x}}}\right) \left|\frac{\partial g}{\partial x_1}\right|^{-1}_{x_1 = x_1^{(j)}} p_{X_1}\big(x_1^{(j)}\big)\, p_{X_2 \ldots X_n}(x_2, \ldots, x_n)\,\mathrm{d}x_2 \cdots \mathrm{d}x_n. \quad (21)$$

The difficulties involved in evaluating Equation (21) are: (a) determining the domain of integration $\Omega_j$, defined by the permissible set of solutions $x_1^{(j)}$, and (b) evaluating the multi-dimensional integral. A recently developed numerical algorithm is used to overcome these difficulties; it is discussed later in this paper.
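Before tackling general $\Omega_j$, it helps to have a case where the integral in Equation (21) can be checked against a known answer. For a linear $g[\mathbf{X}] = a_1 X_1 + a_2 X_2$ there is a single root ($k = 1$), $\Omega_1$ is the whole real line, and the conditional-expectation integral must reduce to Rice's formula for the mean upcrossing rate. A sketch, assuming the integrand form $|g_1|^{-1}\sigma_{\dot Z_1\mid\mathbf{x}}\Psi(\mu_{\dot Z_1\mid\mathbf{x}}/\sigma_{\dot Z_1\mid\mathbf{x}})\,p_{\mathbf{X}}(\cdot)$ as in Naess (1985); all numerical values are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Linear case g[X] = a1*X1 + a2*X2 with X1, X2 iid N(0,1): here k = 1 and
# Omega_1 is all of R, so the result must reduce to Rice's formula.
a1, a2 = 1.0, 2.0            # gradient components g1, g2
s1, s2 = 0.8, 1.5            # std devs of the derivatives X1dot, X2dot
z = 1.2                      # threshold level

sigma_z  = np.hypot(a1, a2)             # std of Z
sigma_zd = np.hypot(a1 * s1, a2 * s2)   # std of Z1dot | x (constant here)
Psi0 = stats.norm.pdf(0.0)              # Psi(0), since mu_{Zdot|x} = 0

def integrand(x2):
    x1 = (z - a2 * x2) / a1             # the single root x1^(1)
    return (1.0 / abs(a1)) * sigma_zd * Psi0 * stats.norm.pdf(x1) * stats.norm.pdf(x2)

nu_cond, _ = quad(integrand, -np.inf, np.inf)
nu_rice = sigma_zd / (2 * np.pi * sigma_z) * np.exp(-z**2 / (2 * sigma_z**2))

print(nu_cond, nu_rice)
```

The agreement of the two values is a useful regression test before moving to nonlinear $g[\cdot]$, where $\Omega_j$ must be found numerically.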

## Vector Case

We now focus on developing models for $p_{Z_1 Z_2 \dot Z_1 \dot Z_2}(\cdot)$. As in the scalar case, we rewrite $p_{Z_1 Z_2 \dot Z_1 \dot Z_2}(z_1, z_2, \dot z_1, \dot z_2)$ as

$$p_{Z_1 Z_2 \dot Z_1 \dot Z_2}(z_1, z_2, \dot z_1, \dot z_2) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} p_{X_3 \ldots X_n Z_1 Z_2 \dot Z_1 \dot Z_2}(x_3, \ldots, x_n, z_1, z_2, \dot z_1, \dot z_2)\,\mathrm{d}x_3 \cdots \mathrm{d}x_n, \quad (22)$$

where the dimension of the integrals is $(n-2)$. The joint pdf $p_{X_3 \ldots X_n Z_1 Z_2 \dot Z_1 \dot Z_2}(\cdot)$ is rewritten as

$$p_{X_3 \ldots X_n Z_1 Z_2 \dot Z_1 \dot Z_2} = \sum_{j=1}^{k} |J_j|^{-1}\, p_{X_1 \ldots X_n \dot Z_1 \dot Z_2}\big(x_1^{(j)}, x_2^{(j)}, x_3, \ldots, x_n, \dot z_1, \dot z_2\big), \quad (23)$$

where, for fixed values of $X_3, \ldots, X_n$, $Z_1$ and $Z_2$, there exist $k$ solutions for $X_1$ and $X_2$, and $J_j$ denotes the Jacobian matrix

$$J = \begin{bmatrix} \partial g/\partial x_1 & \partial g/\partial x_2 \\ \partial h/\partial x_1 & \partial h/\partial x_2 \end{bmatrix}, \quad (24)$$

evaluated at $\big(x_1^{(j)}, x_2^{(j)}\big)$. As before, we now rewrite
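The two-dimensional change of variables behind the Jacobian in Equation (24) can be illustrated with a linear pair $(g, h)$, for which $k = 1$ and $J$ is constant: the root-sum value $|J|^{-1}p_{X_1 X_2}\big(x_1^{(1)}, x_2^{(1)}\big)$ must match the exact Gaussian pdf of $(Z_1, Z_2)$. A sketch (the particular map is our illustrative choice):

```python
import numpy as np
from scipy import stats

# Z1 = g[X] = X1 + X2, Z2 = h[X] = X1 - X2, with X1, X2 iid N(0,1).
# Here k = 1 and J = [[dg/dx1, dg/dx2], [dh/dx1, dh/dx2]].
J = np.array([[1.0,  1.0],
              [1.0, -1.0]])
detJ = abs(np.linalg.det(J))            # |J| = 2

z = np.array([0.7, -0.4])               # point at which to evaluate the joint pdf
x = np.linalg.solve(J, z)               # the single root (x1^(1), x2^(1))

# |J|^{-1} * p_{X1 X2}(x1, x2), cf. the change-of-variables sum in Eq. (23)
p_root_sum = stats.norm.pdf(x[0]) * stats.norm.pdf(x[1]) / detJ

# Exact: (Z1, Z2) is zero-mean Gaussian with covariance J J'
p_exact = stats.multivariate_normal(mean=[0, 0], cov=J @ J.T).pdf(z)

print(p_root_sum, p_exact)
```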

$$p_{X_1 \ldots X_n \dot Z_1 \dot Z_2}\big(x_1^{(j)}, x_2^{(j)}, x_3, \ldots, x_n, \dot z_1, \dot z_2; t_1, t_2\big) = p_{\dot Z_1 \dot Z_2 \mid \mathbf{X}}(\dot z_1, \dot z_2 \mid \mathbf{x}; t_1, t_2)\, p_{\mathbf{X}}(\mathbf{x}), \quad (25)$$

where $p_{\mathbf{X}}(\mathbf{x})$ is the $n$-dimensional Gaussian pdf. The time derivatives of $Z_1(t)$ and $Z_2(t)$, conditioned on $\mathbf{X}$, are expressed as

$$\dot Z_1(t) = \mathbf{G} \dot{\mathbf{X}}(t), \qquad \dot Z_2(t) = \mathbf{H} \dot{\mathbf{X}}(t).$$

Here, $g_j = \partial g/\partial X_j$ and $h_j = \partial h/\partial X_j$, evaluated at $\mathbf{X} = \mathbf{x}$, and $\mathbf{G} = [g_1, \ldots, g_n]$, $\mathbf{H} = [h_1, \ldots, h_n]$. Since $\mathbf{G}$ and $\mathbf{H}$ are constants and $\dot{\mathbf{X}}(t)$ constitutes a vector of zero-mean, stationary, Gaussian random processes, $\dot Z_1(t)$ and $\dot Z_2(t)$, when conditioned on $\mathbf{X}$, are zero-mean, stationary Gaussian processes. The joint conditional pdf $p_{\dot Z_1 \dot Z_2 \mid \mathbf{X}}(\dot z_1, \dot z_2 \mid \mathbf{x}; t_1, t_2)$ is therefore jointly Gaussian and is of the form

$$p_{\dot Z_1 \dot Z_2 \mid \mathbf{X}}(\dot z_1, \dot z_2 \mid \mathbf{x}; t_1, t_2) = \frac{1}{2\pi |\mathbf{A}|^{1/2}} \exp\!\left(-\frac{1}{2}\, \mathbf{w}' \mathbf{A}^{-1} \mathbf{w}\right).$$

Here, $\mathbf{w} = [\dot z_1, \dot z_2]'$, $\mathbf{A} = \mathbf{A}(t_1, t_2) = \mathbf{T} \mathbf{C}_{\dot{\mathbf{X}}}(t_1, t_2) \mathbf{T}'$, the operator $|\cdot|$ denotes the determinant of a matrix, $\mathbf{T} = [\mathbf{G}, \mathbf{H}]'$, and $\mathbf{C}_{\dot{\mathbf{X}}}(t_1, t_2)$ is the covariance matrix $\langle \dot{\mathbf{X}}(t_1) \dot{\mathbf{X}}(t_2)^{*} \rangle$. Without loss of generality, it can be assumed that $\mathbf{X}(t)$ constitutes a vector of mutually independent, stationary, Gaussian random processes. This leads to $\mathbf{C}_{\dot{\mathbf{X}}}(t_1, t_2) = \mathbf{C}_{\dot{\mathbf{X}}}(\tau)$ being a diagonal matrix, where $\tau = t_2 - t_1$.
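Assembling $\mathbf{A}(\tau) = \mathbf{T}\mathbf{C}_{\dot{\mathbf{X}}}(\tau)\mathbf{T}'$ is a pair of matrix products once $\mathbf{G}$ and $\mathbf{H}$ are fixed by the conditioning. A sketch with an illustrative diagonal $\mathbf{C}_{\dot{\mathbf{X}}}(\tau)$ (the exponential-cosine lag structure is our choice, not the paper's):

```python
import numpy as np

# Illustrative gradients of g and h at the conditioning point X = x
G = np.array([0.5, -1.0, 1.5])
H = np.array([2.0, 0.3, -0.7])
T = np.vstack([G, H])                   # T = [G, H]'

def C_xdot(tau, omega=np.array([1.0, 2.0, 3.0]), var=np.array([1.0, 0.5, 2.0])):
    """Diagonal lag-tau covariance of the independent derivative processes
    (illustrative exponential-cosine autocorrelations)."""
    return np.diag(var * np.exp(-abs(tau)) * np.cos(omega * tau))

tau = 0.4
A = T @ C_xdot(tau) @ T.T               # 2x2 covariance of (Z1dot, Z2dot) | x

rho = A[0, 1] / np.sqrt(A[0, 0] * A[1, 1])   # correlation entering the joint Gaussian
print(A, rho)
```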

Substituting Equations (22)-(28) into Equation (12), and rearranging the order of integration, the quantity in Equation (12) takes the form

$$\sum_{j=1}^{k} \int_{\Omega_j} \left[\int I_j(x_3, \ldots, x_n; \tau)\,\mathrm{d}\tau\right] p_{X_1}\big(x_1^{(j)}\big)\, p_{X_2}\big(x_2^{(j)}\big)\, p_{X_3 \ldots X_n}(x_3, \ldots, x_n)\,\mathrm{d}x_3 \cdots \mathrm{d}x_n, \quad (29)$$

where

$$I_j(x_3, \ldots, x_n; \tau) = \int_0^{\infty}\!\!\int_0^{\infty} \dot z_1\, \dot z_2\; p_{\dot Z_1 \dot Z_2 \mid \mathbf{X}}(\dot z_1, \dot z_2 \mid \mathbf{x}; \tau)\,\mathrm{d}\dot z_1\,\mathrm{d}\dot z_2. \quad (30)$$

The integral in Equation (30) can be evaluated in closed form using symbolic software, such as MAPLE, or numerically. Subsequently, the inner integral in Equation (29), with respect to $\tau$, is carried out numerically. The remaining $(n-2)$-dimensional integrals can be evaluated using the numerical algorithm described later in this paper.
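For the zero-mean bivariate Gaussian conditional pdf, the double integral over $(\dot z_1, \dot z_2) \in (0, \infty)^2$ appearing in Equation (30) has a known closed form in terms of the correlation coefficient, which is the kind of expression symbolic software returns. A numerical cross-check (covariance values illustrative):

```python
import numpy as np
from scipy import stats
from scipy.integrate import dblquad

# int_0^inf int_0^inf z1d * z2d * p(z1d, z2d) dz1d dz2d for a zero-mean
# bivariate Gaussian with an illustrative covariance matrix A:
s1, s2, rho = 1.3, 0.9, 0.6
A = np.array([[s1**2,        rho * s1 * s2],
              [rho * s1 * s2, s2**2       ]])

pdf = stats.multivariate_normal(mean=[0, 0], cov=A).pdf

numeric, _ = dblquad(lambda y, x: x * y * pdf([x, y]), 0, np.inf, 0, np.inf)

# Known closed form: E[Z1dot^+ Z2dot^+] =
#   s1*s2/(2*pi) * (sqrt(1 - rho^2) + rho*(pi/2 + arcsin(rho)))
closed = s1 * s2 / (2 * np.pi) * (np.sqrt(1 - rho**2)
                                  + rho * (np.pi / 2 + np.arcsin(rho)))

print(numeric, closed)
```

Having the closed form available makes the inner $\tau$-integration in Equation (29) cheap, since only the correlation coefficient of $\mathbf{A}(\tau)$ changes with the lag.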