
September 8th, 2015

If Z is a function of s independent random variables, we replace each of the original probability distributions of the random variables by k probability concentrations whose locations and magnitudes are determined by the probability concentration method discussed in the previous section. Therefore, for the i-th random variable X_i, i = 1, 2, ..., s, we have a "discrete" probability distribution function with locations and probability concentrations represented by x_ij and p_ij, j = 1, 2, ..., k. For k equal to 5, a schematic representation of the cumulative distribution function for the i-th random variable based on this discretization is shown in Figure 1.

Figure 1. Schematic representations: (a) probability concentrations for the i-th random variable (k = 5); (b) partition and samples for n = 8.
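Eq. (7) for the concentration locations x_ij and magnitudes p_ij is not reproduced in this excerpt. As a hedged illustration of what such a discretization looks like, for a standard normal variate the k-point probability concentrations that match the first 2k − 1 moments coincide with the probabilists' Gauss–Hermite nodes and normalized weights; the helper name below is ours, not the paper's:

```python
import numpy as np

def probability_concentrations_normal(k):
    """k-point probability concentrations (x_ij, p_ij) for a standard
    normal variate: probabilists' Gauss-Hermite nodes and weights,
    with the weights normalized so the concentrations sum to 1."""
    x, w = np.polynomial.hermite_e.hermegauss(k)
    return x, w / w.sum()

x, p = probability_concentrations_normal(5)
# The concentrations reproduce moments of polynomials of degree <= 2k - 1,
# e.g. E[X^4] = 3 for a standard normal variate.
print(np.sum(p * x**4))
```

The five locations x and magnitudes p play the roles of x_ij and p_ij for this variable; other distributions would use Eq. (7) or the tabulated values mentioned later in the text.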

Now, for the i-th random variable we partition the range of the cumulative probability values (i.e., from 0 to 1) into n mutually exclusive and collectively exhaustive intervals, each having a probability P_i,m, m = 1, 2, ..., n. This partition is based on the discrete representation of the probability distribution function defined by (x_ij, p_ij), i = 1, 2, ..., s, and j = 1, 2, ..., k. For example, for n equal to 8 and k = 5, this partition is illustrated in Figure 1, again for the i-th random variable. Since there are s random variables, the sample space is partitioned into n^s cells (hypercubes), each with probability P_1,m1 × P_2,m2 × P_3,m3 × ... × P_s,ms, where m_1 to m_s can take values from 1 to n.

Given a random variable, for each of the partitions we randomly generate a sample. Let sx_i,m denote the randomly generated sample from the partition defined by P_i,m, m = 1, 2, ..., n, for the i-th random variable. The selected values sx_i,m, m = 1, 2, ..., n, must coincide with one of the x_ij, j = 1, 2, ..., k. Again, this is graphically illustrated in Figure 1 (in this particular case sx_i,1 = x_i,1, sx_i,2 = sx_i,3 = x_i,2, sx_i,4 = sx_i,5 = x_i,3, sx_i,6 = sx_i,7 = x_i,4, sx_i,8 = x_i,5). As in the LHS technique, we use these samples sx_i,m, where i = 1, 2, ..., s and m = 1, 2, ..., n, to form n samples x_j, j = 1, 2, ..., n, in the s-dimensional sample space. That is, the first sample x_1 is formed by randomly selecting a value from the n values for each of the random variables. This results in x_1 = (sx_1,k1, sx_2,k2, ..., sx_s,ks), and the probability associated with the cell from which the sample x_1 was obtained, p_1, equals P_1,k1 × P_2,k2 × P_3,k3 × ... × P_s,ks, where each of k_1, k_2, ..., k_s takes a value from 1 to n. The second sample x_2 and the probability associated with the cell from which the sample x_2 was obtained, p_2, are formed in the same way but based only on the remaining n − 1 values for each of the random variables. This process is continued until x_n and p_n are formed. Since the samples obtained in this way represent corners of the hypercubes formed by the point concentrations, we will refer to this sampling technique as the HPCS technique.

We can use these samples to estimate the expected value of Z, E(Z), from the statistic S defined by

S = Σ_{j=1}^{n} n^(s−1) p_j h(x_j),  (12)

where p_j is the probability associated with the cell from which the sample x_j was obtained.

To show that the use of the above statistic in estimating E(Z) is adequate, one can show, following a proof similar to that given by Iman and Conover (1980), that S is an unbiased estimator of E(Z) when Z = h(x) is a polynomial of degree less than 2k − 1, if k concentrations are employed to replace the probability distribution function.
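A small numerical check of the unbiasedness claim, under the assumption that the statistic carries the factor n^(s−1) (needed because only n of the n^s cells are sampled): averaging S over all equally likely pairings of the partition samples reproduces the full-grid expectation over the discrete distribution. The partition values, probabilities, and response function below are hypothetical:

```python
import itertools
import numpy as np

# Hypothetical small case: s = 2 variables, n = 3 partitions per variable.
sx = np.array([[-1.0, 0.0, 1.0],     # partition samples for X1
               [ 0.5, 1.5, 2.5]])    # partition samples for X2
P  = np.array([[0.20, 0.50, 0.30],   # partition probabilities for X1
               [0.25, 0.50, 0.25]])  # partition probabilities for X2
n, s = 3, 2

def h(x1, x2):            # an arbitrary response Z = h(x)
    return x1 * x2 + x2**2

def S(perm):
    # The statistic: S = sum_j n^(s-1) * p_j * h(x_j), with p_j the
    # probability of the cell the j-th sample came from.
    return sum(n**(s - 1) * P[0, j] * P[1, perm[j]] * h(sx[0, j], sx[1, perm[j]])
               for j in range(n))

# Average S over every pairing (a uniform random permutation of X2's
# partitions against X1's) and compare with the full n^s-cell expectation.
mean_S = np.mean([S(perm) for perm in itertools.permutations(range(n))])
grid = sum(P[0, a] * P[1, b] * h(sx[0, a], sx[1, b])
           for a in range(n) for b in range(n))
print(mean_S, grid)
```

The two printed values agree, since each cell is sampled with probability n^(1−s) and the factor n^(s−1) cancels it exactly.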

Note that dependent random variables can be transformed into independent random variables by using the Rosenblatt transformation, and that if only the correlation coefficients between the dependent random variables are available (i.e., incomplete information), one could use the Nataf translation system to transform the correlated random variables into uncorrelated random variables (Madsen et al. 1986). For each of the independent random variables, if its moments exist, we can calculate the x_ij and p_ij of the probability concentrations using Eq. (7). Alternatively, to avoid the evaluation of x_ij and p_ij, we can transform the independent random variables into uniform, beta, exponential, gamma and/or normal variates, since for these distributions x_ij and p_ij are readily available (see Table 1). In the transformed space, we can use the proposed HPCS technique to carry out the probabilistic analysis.
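The decorrelation step of the Nataf translation system can be sketched as follows, under the assumption that the marginals have already been mapped to standard normal space, where correlated normals Z with correlation matrix R become uncorrelated via the Cholesky factor L of R, i.e. U = L⁻¹Z; the correlation value 0.6 is hypothetical:

```python
import numpy as np

R = np.array([[1.0, 0.6],
              [0.6, 1.0]])          # assumed correlation matrix in normal space
L = np.linalg.cholesky(R)           # R = L @ L.T

rng = np.random.default_rng(0)
U0 = rng.standard_normal((100_000, 2))  # independent standard normals
Z = U0 @ L.T                            # correlated normals, corr(Z1, Z2) ~ 0.6
U = np.linalg.solve(L, Z.T).T           # decorrelated: U = L^{-1} Z
```

The columns of U are uncorrelated standard normals, to which the HPCS technique can then be applied independently per variable.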