Rosenblueth’s point estimate method

The point estimate method was developed to approximately evaluate the moments of Z from the first few statistical moments of X (Rosenblueth 1975, 1981). The method does not require knowledge of the probability distributions of X beyond their statistical moments, such as the means, standard deviations, correlation coefficients and skewness coefficients. The method basically replaces the original probability density functions of the random variables by probability concentrations, with the magnitudes and locations of the concentrations determined from the moments of the random variables. In particular, if Z is a function of only one random variable X, and two point concentrations (i.e., the two-point estimate method) are considered for the random variable X, the locations x_j and magnitudes p_j of the concentrations are given by (Rosenblueth 1975, 1981),

x_j = \mu_X + \xi_j \sigma_X, \quad j = 1, 2, \qquad (4)


p_j = \frac{(-1)^j \, \xi_{3-j}}{2\sqrt{1 + (\nu_X/2)^2}}, \quad j = 1, 2, \qquad (5)

where \xi_j = \nu_X/2 + (-1)^{3-j}\sqrt{1 + (\nu_X/2)^2}, j = 1, 2, and \mu_X, \sigma_X and \nu_X are, respectively, the mean, standard deviation, and coefficient of skewness of X. Using these concentrations, the mean of Z, for example, can be approximated by,

E(Z) \approx \sum_{j=1}^{n} p_j h(x_j), \qquad (6)


where n = 2. If Z is a polynomial of degree less than 4, Eq. (6) will provide an exact mean of Z.
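As an illustration, Eqs. (4)-(6) can be sketched as follows. This is a minimal sketch; the mean, standard deviation, skewness and the cubic function h used here are arbitrary illustrative choices, not values from the text. Because h is a cubic, the two-point estimate of E(Z) should be exact.

```python
import numpy as np

def rosenblueth_two_point(mean, std, skew):
    """Locations x_j and concentrations p_j per Eqs. (4)-(5)."""
    r = np.sqrt(1.0 + (skew / 2.0) ** 2)
    xi = np.array([skew / 2.0 + r, skew / 2.0 - r])   # xi_1, xi_2
    x = mean + xi * std                               # Eq. (4)
    p = np.array([-xi[1], xi[0]]) / (2.0 * r)         # Eq. (5)
    return x, p

# X with mean 1.0, standard deviation 0.5, skewness 0.8 (illustrative).
x, p = rosenblueth_two_point(1.0, 0.5, 0.8)

# Z = h(X) is a cubic, so Eq. (6) with n = 2 gives the exact mean:
# E(Z) = 2*E(X^3) - E(X) + 4 = 2*1.85 - 1 + 4 = 6.7
h = lambda t: 2.0 * t ** 3 - t + 4.0
EZ = float(np.sum(p * h(x)))
```

The weights sum to one and the first three moments of X are matched by construction, which is why the cubic example is reproduced exactly.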

In general, if n concentrations are considered, the locations x_j and magnitudes p_j, j = 1, 2, …, n, of the concentrations can be obtained by solving the following 2n equations,

\begin{bmatrix}
1 & 1 & \cdots & 1 \\
x_1 & x_2 & \cdots & x_n \\
\vdots & \vdots & & \vdots \\
x_1^{2n-1} & x_2^{2n-1} & \cdots & x_n^{2n-1}
\end{bmatrix}
\begin{bmatrix}
p_1 \\ p_2 \\ \vdots \\ p_n
\end{bmatrix}
=
\begin{bmatrix}
1 \\ m_1 \\ \vdots \\ m_{2n-1}
\end{bmatrix}, \qquad (7)

where m_j is the j-th moment of X with respect to the origin. In such a case, the use of n concentrations matches the first 2n−1 moments of the random variable X. Therefore, if Z is a polynomial of degree ≤ 2n−1, the estimated mean of Z obtained by using Eq. (6) with n concentrations is exact.
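A concrete check of the moment-matching system in Eq. (7): for a standard normal X (an assumed distribution, chosen only for illustration), the two-point concentrations reduce to x = ±1 with p = 1/2 each. They reproduce the raw moments m_0 through m_3 but not m_4, which is why exactness stops at degree 2n − 1 = 3.

```python
import numpy as np

x = np.array([-1.0, 1.0])        # two-point locations for N(0, 1)
p = np.array([0.5, 0.5])         # equal concentrations (zero skewness)
m = [1.0, 0.0, 1.0, 0.0, 3.0]    # raw moments m_0..m_4 of N(0, 1)

# Left-hand side of Eq. (7) for k = 0, ..., 4.
matched = [float(np.sum(p * x ** k)) for k in range(5)]
# matched[0..3] equal m_0..m_3; matched[4] = 1.0, whereas m_4 = 3.0.
```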


Consider the case where Z, Z = h(X), is a function of a random variable X. We discretize the domain of X into two mutually exclusive and collectively exhaustive intervals I_i, i = 1, 2. The probability of the i-th interval, p_{I_i}, is given by p_{I_i} = P(X ∈ I_i). According to the LHS technique, we randomly select a sample x_{si}, i = 1, 2, from I_i and calculate the expected value of Z using,

E(Z) \approx \sum_{i=1}^{n} p_{I_i} h(x_{si}), \qquad (8)


where n = 2. This approximation, which uses only two samples, is unlikely to be accurate or satisfactory even when h(X) is a linear function of X. Clearly, we can overcome this by using Rosenblueth’s two-point estimate method, but with the following “sampling” interpretation so that we can extend it later. We replace the original probability distribution function of X according to Rosenblueth’s two-point estimate method, leading to x_j and p_j, j = 1, 2, given by Eqs. (4) and (5). We divide the space into two intervals such that the cumulative probability distribution function for the i-th interval I_i varies from \sum_{j=0}^{i-1} p_j to \sum_{j=0}^{i} p_j, where p_0 = 0 and i = 1, 2. Therefore, the probability of the i-th interval I_i, p_{I_i}, equals p_i. We select the sample x_{si} from the interval I_i to be equal to the location of the i-th concentration x_i obtained according to Rosenblueth’s two-point estimate method. Based on this sampling scheme, the approximation to E(Z) calculated by using Eq. (8) is identical to Eq. (6), since p_{I_i} = p_i and x_{si} = x_i. As already mentioned, such an approximation is exact if h(X) is a polynomial of degree less than 4. Therefore, one has judiciously selected the “sample” points for the sampling technique to become much more efficient. It is noteworthy that a similar observation can be made when n samples are employed.
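A small sketch of this sampling interpretation, again assuming a standard normal X purely for illustration (the inverse CDF comes from the standard library's NormalDist). Plain two-sample LHS draws a random point in each probability interval, while the judicious choice x_{si} = x_i recovers Eq. (6) and hence the exact mean for the quadratic below.

```python
import numpy as np
from statistics import NormalDist

# Two-point concentrations for N(0, 1): x = [-1, 1], p = [0.5, 0.5], so the
# intervals I_1, I_2 cover cumulative probabilities [0, 0.5] and [0.5, 1].
x = np.array([-1.0, 1.0])
p = np.array([0.5, 0.5])
h = lambda t: t ** 2 + 2.0 * t + 1.0   # degree < 4; exact E(Z) = 1 + 0 + 1 = 2

# Plain two-sample LHS per Eq. (8): a random point in each interval mapped
# through the inverse CDF -- unbiased, but a different answer on every run.
rng = np.random.default_rng(0)
u = [rng.uniform(0.0, 0.5), rng.uniform(0.5, 1.0)]
EZ_lhs = float(np.sum(p * h(np.array([NormalDist().inv_cdf(v) for v in u]))))

# Judicious samples x_si = x_i: Eq. (8) collapses to Eq. (6) and is exact.
EZ_pem = float(np.sum(p * h(x)))       # equals 2.0
```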

The efficiency of using n point concentrations depends on the efficiency in solving the system of nonlinear equations (Eq. (7)) to find x_j and p_j, j = 1, 2, …, n. The well-known approach to solving Eq. (7) (Erdelyi et al. 1953) is to first find the zeros (i.e., x_j, j = 1, 2, …, n) of the polynomial \sum_{i=0}^{n} c_i x^i = 0, where c_n = 1, and c_j, j = 0, 1, …, n−1, are obtained from,

M_{2n-2} \begin{bmatrix} c_0 & c_1 & \cdots & c_{n-1} \end{bmatrix}^T = -\begin{bmatrix} m_n & m_{n+1} & \cdots & m_{2n-1} \end{bmatrix}^T, \qquad (9)

where

M_{2n-2} = \begin{bmatrix}
m_0 & m_1 & \cdots & m_{n-1} \\
m_1 & m_2 & \cdots & m_n \\
\vdots & \vdots & & \vdots \\
m_{n-1} & m_n & \cdots & m_{2n-2}
\end{bmatrix}.

The solution is then used to find p_j in Eq. (7).
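The procedure just described can be sketched numerically as follows. This is a minimal implementation assuming real, distinct roots; the raw moments 1, 0, 1, 0, 3, 0 used for the check are those of the standard normal, for which the n = 3 concentrations should be the probabilists' three-point Gauss-Hermite rule: x = (−√3, 0, √3) with p = (1/6, 2/3, 1/6).

```python
import numpy as np

def concentrations_from_moments(m):
    """Given raw moments m = [m_0, ..., m_{2n-1}], return locations x_j and
    concentrations p_j: solve the linear system above for the polynomial
    coefficients c_j, take the polynomial's roots as the locations, then
    solve Eq. (7) for the weights."""
    m = np.asarray(m, dtype=float)
    n = m.size // 2
    M = np.array([[m[i + j] for j in range(n)] for i in range(n)])  # M_{2n-2}
    c = np.linalg.solve(M, -m[n:])                   # c_0, ..., c_{n-1}
    x = np.roots(np.concatenate(([1.0], c[::-1])))   # zeros of sum_i c_i x^i
    x = np.real_if_close(x)
    V = np.vander(x, N=2 * n, increasing=True).T     # rows x^0, ..., x^{2n-1}
    p = np.linalg.lstsq(V, m, rcond=None)[0]         # weights from Eq. (7)
    order = np.argsort(x)
    return x[order], p[order]

x, p = concentrations_from_moments([1.0, 0.0, 1.0, 0.0, 3.0, 0.0])
```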

The solution for x_j and p_j does exist, and the p_j are larger than zero. This comes directly from a theorem on orthogonal polynomials and quadrature formulas, which states that (Erdelyi et al. 1953, Stroud and Secrest 1966):

For a non-negative weight function f(x) (in our case it represents the probability density function), if the moments m_i exist and |M_{2n}| ≠ 0, a unique sequence of orthogonal polynomials {q_i(x)}, i = 1, …, n (except for normalization constants), can be constructed. q_n(x) has n distinct roots (abscissas) which lie in the orthogonality interval. Using these n roots we can find the weights p_i such that,

\int_\Omega f(x) h(x)\, dx = \sum_{i=1}^{n} p_i h(x_i), \qquad (10)

is exact if h(x) is a polynomial of degree ≤ 2n−1. Further, the p_i are positive.

In other words, the above says that we can find the abscissas x_i and the positive weights p_i such that,

m_k = \int_\Omega x^k f(x)\, dx = \sum_{i=1}^{n} p_i x_i^k, \quad k = 1, 2, \ldots, 2n-1, \qquad (11)

which is equivalent to Eq. (7). Therefore, if some commonly used weighting functions can be transformed into probability density functions by appropriate normalization constants, the abscissas and weights obtained for the Gaussian quadrature formulas can be directly transformed into locations and probability concentrations in the point estimate method. There are three well-known classical weighting functions, associated with the Jacobi integration, the (generalized) Laguerre integration and the Hermite integration. With appropriate normalization constants, these weighting functions can be transformed into the beta distribution, the gamma distribution and the normal distribution, respectively. The correspondences between the abscissas and locations and between the weights and the probability concentrations for these cases are listed in Table 1. Note that the Legendre integration is a particular case of the Jacobi integration. Values of the abscissas and weights for these quadrature formulas are tabulated in Stroud and Secrest (1966). They can also be calculated using the algorithms given in Press et al. (1992).
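For instance, the Hermite case of this correspondence amounts to the following transformation. This is a sketch; the normal parameters μ and σ below are illustrative. Gauss-Hermite quadrature uses the weight exp(−t²), so dividing the weights by √π and substituting x = μ + √2 σt converts the tabulated abscissas and weights into locations and probability concentrations for N(μ, σ²).

```python
import numpy as np

t, w = np.polynomial.hermite.hermgauss(4)   # 4-point Gauss-Hermite rule
mu, sigma = 2.0, 0.5                        # illustrative N(2, 0.25)
x = mu + np.sqrt(2.0) * sigma * t           # locations
p = w / np.sqrt(np.pi)                      # probability concentrations

# With n = 4 points the rule is exact up to degree 2n-1 = 7, e.g.
EX2 = float(np.sum(p * x ** 2))             # E[X^2] = mu^2 + sigma^2 = 4.25
```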

If Z, Z = h(X), is a function of s random variables, we can generate n samples according to the LHS technique and evaluate h(x) at these n sampling points in order to estimate the mean E(Z). The number of random variables s does not change the fact that we need to evaluate the function h(x) n times. However, if one uses the point estimate method with n concentrations for each random variable, one needs to evaluate the function h(x) n^s times. This can be extremely large and make the point estimate method unattractive. For example, n^s equals 9,765,625 for n = 5 and s = 10. In such a case, one is better off using the LHS technique or any other simulation technique. It is noted that the point estimate schemes with fewer concentrations that have been reported in the literature (see Hong (1998) and the references therein) may also be employed. However, none of these schemes can appropriately take into account the cross terms of order higher than 3.

In short, the above indicates that the point estimate method is very efficient if we are
interested in a function of only one random variable, while the use of the LHS sampling technique is desirable if the number of random variables is large. In the following we propose a method that takes advantage of both of these methods.