Problem: A Model with Correlated Inputs
Let’s imagine you have a simple engineering model, $Y = f(X_1, X_2)$, that takes two uncertain inputs, $X_1$ and $X_2$.
Now, suppose your experimental data tells you that $X_1$ and $X_2$ are correlated. For instance, they might both be related to the ambient temperature. We can describe this relationship using a covariance matrix.
Let’s say the inputs are Gaussian random variables with a mean of zero and the following covariance matrix $C$:

$$
C = \begin{pmatrix} 1 & 0.8 \\ 0.8 & 1 \end{pmatrix}
$$

The non-zero off-diagonal terms (0.8) indicate a strong positive correlation between $X_1$ and $X_2$.
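As a quick sanity check, here is a minimal NumPy sketch (assuming the unit-variance covariance matrix above, so the 0.8 entry is also the correlation coefficient) that samples the correlated inputs:

```python
import numpy as np

# Covariance matrix of the correlated inputs X1, X2 (unit variances assumed)
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])

rng = np.random.default_rng(seed=0)
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=100_000)

# The sample correlation matrix should show roughly 0.8 off the diagonal
print(np.corrcoef(X.T))
```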
PCE Challenge
You want to build a PCE for $Y$, but the standard method requires the inputs to be independent. You can’t just create a basis by multiplying 1D Hermite polynomials in $X_1$ and $X_2$: because of the correlation, their joint density no longer factors into a product of marginals, so the product basis loses the orthogonality that PCE relies on.
Solution: Applying the KL Expansion
The KL expansion transforms our correlated vector $\mathbf{X} = (X_1, X_2)^T$ into a new vector $\boldsymbol{\xi} = (\xi_1, \xi_2)^T$ whose components are uncorrelated standard normal random variables. Since the $\xi_i$ are jointly Gaussian, uncorrelated also means independent, which is exactly what the PCE basis needs.
The transformation is given by the formula from the notes:

$$
\mathbf{X} = \boldsymbol{\mu} + \sum_{i} \sqrt{\lambda_i}\, \xi_i\, \mathbf{v}_i = \boldsymbol{\mu} + B\boldsymbol{\xi}
$$

Since our mean $\boldsymbol{\mu}$ is zero, this simplifies to $\mathbf{X} = B\boldsymbol{\xi}$. The matrix $B = V\Lambda^{1/2}$ is calculated from the eigenvalues and eigenvectors of the covariance matrix $C$.
- **Eigendecomposition of $C$:** We find the eigenvalues ($\lambda_i$) and eigenvectors ($\mathbf{v}_i$) of our covariance matrix (verified numerically in the sketch after this list). For the matrix above, this gives:
  - $\lambda_1 = 1.8$, $\mathbf{v}_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$
  - $\lambda_2 = 0.2$, $\mathbf{v}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}$
- **Construct the Transformation:** We use these to build the matrix $B = V\Lambda^{1/2}$ and find the relationship between $\mathbf{X}$ and $\boldsymbol{\xi}$:

  $$
  B = \underbrace{\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}}_{V} \underbrace{\begin{pmatrix} \sqrt{1.8} & 0 \\ 0 & \sqrt{0.2} \end{pmatrix}}_{\Lambda^{1/2}} = \begin{pmatrix} \sqrt{0.9} & \sqrt{0.1} \\ \sqrt{0.9} & -\sqrt{0.1} \end{pmatrix}
  $$

  This gives us our original variables as a function of the new, uncorrelated ones:

  $$
  \begin{aligned}
  X_1 &= \sqrt{0.9}\,\xi_1 + \sqrt{0.1}\,\xi_2 \\
  X_2 &= \sqrt{0.9}\,\xi_1 - \sqrt{0.1}\,\xi_2
  \end{aligned}
  $$
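Here is a short NumPy check of both steps (a sketch using the covariance matrix from the problem statement; note that `np.linalg.eigh` returns eigenvalues in ascending order, so the ordering differs from the list above, but the product $BB^T$ is unaffected):

```python
import numpy as np

C = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Step 1: eigendecomposition C = V @ diag(lam) @ V.T
lam, V = np.linalg.eigh(C)
print(lam)                # [0.2 1.8]

# Step 2: transformation matrix B = V @ diag(sqrt(lam)), so that X = B @ xi
B = V @ np.diag(np.sqrt(lam))

# Sanity check: Cov[B @ xi] = B @ B.T should recover C
print(B @ B.T)            # [[1.  0.8], [0.8 1. ]]
```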
Build the PCE with the New Random Variables
Now we have a new set of variables, $\xi_1$ and $\xi_2$, which are independent and follow a standard normal distribution. We can substitute their expressions back into our original model:

$$
Y = f(X_1, X_2) = f\!\left(\sqrt{0.9}\,\xi_1 + \sqrt{0.1}\,\xi_2,\ \sqrt{0.9}\,\xi_1 - \sqrt{0.1}\,\xi_2\right) \equiv \tilde{f}(\xi_1, \xi_2)
$$

This defines a new, equivalent model $\tilde{f}$ that is a function of independent variables.
Now, building the PCE is simple: because $\xi_1$ and $\xi_2$ are independent standard normal variables, we can use a standard PCE basis built from products of 1D Hermite polynomials ($He_n$):

$$
Y \approx \sum_{\alpha} c_{\alpha}\, \Psi_{\alpha}(\boldsymbol{\xi}), \qquad \Psi_{\alpha}(\boldsymbol{\xi}) = He_{\alpha_1}(\xi_1)\, He_{\alpha_2}(\xi_2)
$$
The coefficients $c_{\alpha}$ can now be calculated easily using standard methods, like quadrature or non-intrusive projection, that work perfectly for independent variables.
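To make this concrete, here is a hedged sketch of non-intrusive projection via tensorized Gauss–Hermite quadrature. The toy model `f` below is a hypothetical stand-in (the original model is never specified), the truncation degree is arbitrary, and the helper `herme` is my own; the sketch uses NumPy’s `hermite_e` module for the probabilists’ Hermite polynomials $He_n$:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

def f(x1, x2):
    """Hypothetical toy model standing in for the unspecified f(X1, X2)."""
    return x1 * x2 + 0.5 * x1**2

# KL transform X = B @ xi, with B as worked out above
B = np.array([[sqrt(0.9),  sqrt(0.1)],
              [sqrt(0.9), -sqrt(0.1)]])

# 1D Gauss-Hermite(e) rule for the weight exp(-t^2 / 2); the raw weights sum
# to sqrt(2*pi), so normalizing turns quadrature sums into expectations.
nodes, weights = hermegauss(10)
weights /= np.sqrt(2.0 * np.pi)

def herme(n, t):
    """Probabilists' Hermite polynomial He_n evaluated at t."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(t, c)

# Non-intrusive projection: c_alpha = E[Y * Psi_alpha] / E[Psi_alpha^2],
# with E[He_n(xi)^2] = n! for a standard normal xi.
max_degree = 3
coeffs = {}
for i in range(max_degree + 1):
    for j in range(max_degree + 1 - i):          # total degree i + j <= 3
        acc = 0.0
        for t1, w1 in zip(nodes, weights):
            for t2, w2 in zip(nodes, weights):
                x1, x2 = B @ np.array([t1, t2])  # map xi back to correlated X
                acc += w1 * w2 * f(x1, x2) * herme(i, t1) * herme(j, t2)
        coeffs[(i, j)] = acc / (factorial(i) * factorial(j))

for alpha in sorted(coeffs):
    if abs(coeffs[alpha]) > 1e-10:
        print(alpha, round(coeffs[alpha], 4))
```

For this polynomial toy model the expansion is exact at degree 2, so only a handful of coefficients come out non-zero; for a general model you would increase the truncation degree and quadrature level until the coefficients stabilize.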
The KL expansion served as the essential data preprocessing step that “unlocked” the problem, allowing us to use the powerful and efficient machinery of Polynomial Chaos Expansion.