University of California, Los Angeles
Department of Statistics
Statistics C173/C273

Instructor: Nicolas Christou

Ordinary kriging in terms of the covariance function

The model:
The model assumption is
\[
Z(s) = \mu + \delta(s),
\]
where $\delta(s)$ is a zero-mean stochastic term with variogram $2\gamma(\cdot)$.

The Kriging System
The predictor assumption is
\[
\hat{Z}(s_0) = \sum_{i=1}^{n} w_i Z(s_i).
\]
It is a weighted average of the sample values, and the $w_i$'s are the weights that will be estimated. We require $\sum_{i=1}^{n} w_i = 1$ to ensure unbiasedness.
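As a quick illustration of the predictor, here is a minimal Python sketch with made-up sample values and weights (everything below is hypothetical):

```python
import numpy as np

# Hypothetical sample values Z(s1), Z(s2), Z(s3) and weights w1, w2, w3
z = np.array([3.1, 2.7, 3.8])
w = np.array([0.5, 0.3, 0.2])

assert np.isclose(w.sum(), 1.0)   # the unbiasedness constraint: sum(wi) = 1
z_hat = w @ z                     # Z_hat(s0) = sum_i wi * Z(si)
print(z_hat)                      # 3.12
```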

Kriging minimizes the mean squared error of prediction
\[
\min \; \sigma_e^2 = E\left[\left(Z(s_0) - \hat{Z}(s_0)\right)^2\right]
\]
or
\[
\min \; \sigma_e^2 = E\left[\left(Z(s_0) - \sum_{i=1}^{n} w_i Z(s_i)\right)^2\right].
\]
For a second-order stationary process the last equation can be written as:
\[
\sigma_e^2 = C(0) - 2\sum_{i=1}^{n} w_i C(s_0, s_i) + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C(s_i, s_j). \tag{1}
\]
See below for the proof.
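Equation (1) is straightforward to transcribe directly. A minimal Python sketch, assuming the covariance values have already been computed from some covariance model (all numbers below are made up):

```python
import numpy as np

def mse(w, C0, c0, Cij):
    """Equation (1): C(0) - 2*sum_i wi*C(s0,si) + sum_ij wi*wj*C(si,sj)."""
    w = np.asarray(w)
    return C0 - 2 * w @ np.asarray(c0) + w @ np.asarray(Cij) @ w

# Hypothetical covariance values for n = 2 locations
C0 = 2.0                          # C(0)
c0 = [1.2, 0.8]                   # C(s0, si)
Cij = [[2.0, 0.9], [0.9, 2.0]]    # C(si, sj): symmetric, C(0) on the diagonal
print(mse([0.6, 0.4], C0, c0, Cij))
```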

Let’s examine (Z(s0 ) −

Pn

i=1

wi Z(si ))2 : z(s0 ) −

n X

!2

wi z(si ) + µ − µ

=

i=1

(

[z(s0 ) − µ] −

n X

)2

wi [z(si ) − µ]

=

i=1

[z(s0 ) − µ]2 − 2

n X

wi [z(si ) − µ][z(s0 ) − µ] +

i=1

n X n X

wi wj [z(si ) − µ][z(sj ) − µ] .

i=1 j=1

If we take expectations of the last expression we have
\[
E\left\{[z(s_0) - \mu]^2\right\} - 2\sum_{i=1}^{n} w_i E\left\{[z(s_i) - \mu][z(s_0) - \mu]\right\} + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j E\left\{[z(s_i) - \mu][z(s_j) - \mu]\right\}.
\]
The expectations above are the covariances:
\[
C(0) - 2\sum_{i=1}^{n} w_i C(s_0, s_i) + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C(s_i, s_j).
\]
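The identity can also be checked numerically. Here is a sketch that simulates a second-order stationary Gaussian process at $s_0, s_1, \ldots, s_n$ and compares the simulated mean squared error against the covariance expression; the exponential covariance model, locations, and weights are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def cov(h, sill=2.0, range_=10.0):
    """Hypothetical exponential covariance: C(h) = sill * exp(-h / range_)."""
    return sill * np.exp(-np.asarray(h, dtype=float) / range_)

pts = np.array([0.0, 1.0, 4.0, 7.0])              # pts[0] is s0; the rest are s1..s3
Sigma = cov(np.abs(pts[:, None] - pts[None, :]))  # covariance matrix, C(0) on the diagonal
w = np.array([0.5, 0.3, 0.2])                     # any weights with sum(wi) = 1

# Covariance expression: C(0) - 2*sum wi C(s0,si) + sum sum wi wj C(si,sj)
mse_theory = Sigma[0, 0] - 2 * w @ Sigma[0, 1:] + w @ Sigma[1:, 1:] @ w

# Monte Carlo estimate of E[(Z(s0) - sum wi Z(si))^2], with nonzero mean mu
mu = 5.0
Z = rng.multivariate_normal(np.full(4, mu), Sigma, size=200_000)
mse_mc = np.mean((Z[:, 0] - Z[:, 1:] @ w) ** 2)

print(mse_theory, mse_mc)   # the two values should agree closely
```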

Therefore kriging minimizes
\[
\sigma_e^2 = E\left[\left(Z(s_0) - \sum_{i=1}^{n} w_i Z(s_i)\right)^2\right] = C(0) - 2\sum_{i=1}^{n} w_i C(s_0, s_i) + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C(s_i, s_j)
\]
subject to
\[
\sum_{i=1}^{n} w_i = 1.
\]

The minimization is carried out over $(w_1, w_2, \ldots, w_n)$, subject to the constraint $\sum_{i=1}^{n} w_i = 1$. Therefore the minimization problem can be written as:
\[
\min \; C(0) - 2\sum_{i=1}^{n} w_i C(s_0, s_i) + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C(s_i, s_j) - 2\lambda\left(\sum_{i=1}^{n} w_i - 1\right), \tag{2}
\]

where $\lambda$ is the Lagrange multiplier. After differentiating (2) with respect to $w_1, w_2, \ldots, w_n$ and $\lambda$, and setting the derivatives equal to zero, we find that
\[
2\sum_{j=1}^{n} w_j C(s_i, s_j) - 2C(s_0, s_i) - 2\lambda = 0, \quad i = 1, \ldots, n,
\]
or equivalently
\[
\sum_{j=1}^{n} w_j C(s_i, s_j) - C(s_0, s_i) - \lambda = 0, \quad i = 1, \ldots, n,
\]
and
\[
\sum_{i=1}^{n} w_i = 1.
\]
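These first-order conditions can be verified symbolically. A small sketch with sympy for $n = 2$; the symbols stand for the unknown covariance values, and this is just a check, not part of the handout's derivation:

```python
import sympy as sp

n = 2
w = [sp.Symbol(f'w{i + 1}') for i in range(n)]       # weights w1, w2
lam, C0 = sp.symbols('lambda C0')                    # multiplier and C(0)
c0 = [sp.Symbol(f'c0{i + 1}') for i in range(n)]     # C(s0, si)
Cij = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'C{min(i, j) + 1}{max(i, j) + 1}'))  # symmetric C(si, sj)

# Objective (2)
obj = (C0 - 2 * sum(w[i] * c0[i] for i in range(n))
       + sum(w[i] * w[j] * Cij[i, j] for i in range(n) for j in range(n))
       - 2 * lam * (sum(w) - 1))

for i in range(n):
    # d obj / d wi, divided by 2: sum_j wj C(si,sj) - C(s0,si) - lambda
    print(sp.expand(sp.diff(obj, w[i]) / 2))
# d obj / d lambda, divided by -2: sum_i wi - 1
print(sp.expand(sp.diff(obj, lam) / -2))
```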


Using matrix notation the previous system of equations can be written as
\[
\mathbf{C}\mathbf{W} = \mathbf{c}.
\]
Therefore the weights $w_1, w_2, \ldots, w_n$ and the Lagrange multiplier $\lambda$ can be obtained by
\[
\mathbf{W} = \mathbf{C}^{-1}\mathbf{c},
\]
where
\[
\mathbf{W} = (w_1, w_2, \ldots, w_n, -\lambda)', \qquad
\mathbf{c} = (C(s_0, s_1), C(s_0, s_2), \ldots, C(s_0, s_n), 1)',
\]
and the $(n+1) \times (n+1)$ matrix $\mathbf{C}$ has entries
\[
\mathbf{C}_{ij} =
\begin{cases}
C(s_i, s_j), & i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, n, \\
1, & i = n + 1, \; j = 1, \ldots, n, \\
1, & j = n + 1, \; i = 1, \ldots, n, \\
0, & i = n + 1, \; j = n + 1.
\end{cases}
\]
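Putting the system together numerically is straightforward. A minimal sketch with numpy; the exponential covariance model, locations, and observed values are all made up for illustration, and any valid covariance function could be substituted:

```python
import numpy as np

def cov(h, sill=2.0, range_=10.0):
    """Hypothetical exponential covariance: C(h) = sill * exp(-h / range_)."""
    return sill * np.exp(-np.asarray(h, dtype=float) / range_)

s0 = np.array([2.0, 2.0])                                        # prediction location
S = np.array([[0.0, 0.0], [5.0, 1.0], [1.0, 6.0], [4.0, 4.0]])   # data locations s1..sn
z = np.array([3.1, 2.7, 3.8, 3.0])                               # Z(s1), ..., Z(sn)
n = len(S)

# Augmented matrix C: C(si,sj) block, bordered by ones, 0 in the corner
C = np.ones((n + 1, n + 1))
C[:n, :n] = cov(np.linalg.norm(S[:, None] - S[None, :], axis=2))
C[n, n] = 0.0

# Right-hand side c = (C(s0,s1), ..., C(s0,sn), 1)'
c = np.append(cov(np.linalg.norm(S - s0, axis=1)), 1.0)

W = np.linalg.solve(C, c)         # W = C^{-1} c = (w1, ..., wn, -lambda)'
w, lam = W[:n], -W[n]

print(w, w.sum())                 # the weights sum to 1
print(w @ z)                      # the ordinary kriging predictor Z_hat(s0)
```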

The variance of the estimator:
So far, we found the weights and therefore we can compute the estimator $\hat{Z}(s_0) = \sum_{i=1}^{n} w_i Z(s_i)$. How about the variance of the estimator, namely $\sigma_e^2$? We multiply
\[
\sum_{j=1}^{n} w_j C(s_i, s_j) - C(s_0, s_i) - \lambda = 0, \quad i = 1, \ldots, n,
\]
by $w_i$ and sum over $i$, which gives $\sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C(s_i, s_j) = \sum_{i=1}^{n} w_i C(s_0, s_i) + \lambda$ (using $\sum_{i=1}^{n} w_i = 1$). Substituting this into (1) yields
\[
\sigma_e^2 = C(0) - \sum_{i=1}^{n} w_i C(s_0, s_i) + \lambda.
\]
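Continuing the numeric sketch from above (same hypothetical covariance model and locations), the prediction variance can be computed either directly from equation (1) or from the simplified form; at the optimal weights the two agree:

```python
import numpy as np

def cov(h, sill=2.0, range_=10.0):
    """Hypothetical exponential covariance, as in the previous sketch."""
    return sill * np.exp(-np.asarray(h, dtype=float) / range_)

s0 = np.array([2.0, 2.0])
S = np.array([[0.0, 0.0], [5.0, 1.0], [1.0, 6.0], [4.0, 4.0]])
n = len(S)

Cij = cov(np.linalg.norm(S[:, None] - S[None, :], axis=2))   # C(si, sj)
c0 = cov(np.linalg.norm(S - s0, axis=1))                     # C(s0, si)

# Solve C W = c for (w1, ..., wn, -lambda)' as before
C = np.ones((n + 1, n + 1))
C[:n, :n] = Cij
C[n, n] = 0.0
W = np.linalg.solve(C, np.append(c0, 1.0))
w, lam = W[:n], -W[n]

sigma2_direct = cov(0.0) - 2 * w @ c0 + w @ Cij @ w   # equation (1)
sigma2_short = cov(0.0) - w @ c0 + lam                # C(0) - sum wi C(s0,si) + lambda
print(sigma2_direct, sigma2_short)                    # identical up to rounding
```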