Reduced Basis Method
General Philosophy
It is worth spending a high computational cost to obtain "good approximation" subspaces. These subspaces are built hierarchically in a greedy manner until some tolerance is satisfied. At the end we have an approximation subspace

$$X_N = \operatorname{span}\{\xi_1, \ldots, \xi_N\},$$

where the $\xi_n = u(\mu_n)$ are called snapshots and $u(\mu_n)$ solves (2) for $\mu = \mu_n$. The $\mu_n$ are chosen using the Greedy Algorithm. If we want the solution to (2) for any $\mu \in \mathcal{D}$, we employ a Galerkin projection onto $X_N$: find $u_N(\mu) \in X_N$ such that

$$a(u_N(\mu), v; \mu) = f(v; \mu) \quad \text{for all } v \in X_N.$$
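To make the projection step concrete, here is a minimal NumPy sketch. It assumes problem (2) has already been discretized into a parameter-dependent system matrix `A_mu` and right-hand side `f_mu`, and that the orthonormalized snapshots are stored as the columns of `Z`; all names are illustrative, not from the original.

```python
import numpy as np

def galerkin_project(A_mu, f_mu, Z):
    """Galerkin projection of the discretized problem onto the reduced basis.

    A_mu : (n, n) full-order system matrix for one parameter value mu
    f_mu : (n,)   full-order right-hand side for the same mu
    Z    : (n, N) reduced basis, one orthonormalized snapshot per column
    """
    A_N = Z.T @ A_mu @ Z             # N x N reduced system matrix
    f_N = Z.T @ f_mu                 # reduced right-hand side
    u_N = np.linalg.solve(A_N, f_N)  # reduced coefficients u_{N,n}(mu)
    return Z @ u_N                   # reduced-basis approximation in full space
```

Since $N$ is small, this reduced solve is cheap; the expensive work is computing the snapshots offline.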
Three Key Ingredients
- Training Set
- Greedy Algorithm
- A posteriori error bound
Training Set
In our project we always start off by picking a training set $\Xi_{\mathrm{train}}$, consisting of $n_{\mathrm{train}}$ points from the parameter space $\mathcal{D}$. We run the greedy algorithm on this set. The training set should be cheap to generate and free of redundant samples, to avoid unnecessary computation, but on the other hand it must be rich enough to capture the most representative snapshots.
Greedy Algorithm
Suppose we are given a training set $\Xi_{\mathrm{train}}$, a sample set $S_N = \{\mu_1, \ldots, \mu_N\}$ and a reduced basis space $X_N$. We seek to pick $\mu_{N+1}, \ldots, \mu_{N_{\max}}$ and build nested reduced basis spaces $X_N \subset X_{N+1} \subset \cdots \subset X_{N_{\max}}$ in a greedy manner by solving the following optimization problem: For $N = 1, \ldots, N_{\max} - 1$, find

$$\mu_{N+1} = \arg\max_{\mu \in \Xi_{\mathrm{train}}} \Delta_N(\mu),$$

where $\Delta_N(\mu)$ is the a posteriori error bound (see below). We then add $\mu_{N+1}$ to the sample to get $S_{N+1} = S_N \cup \{\mu_{N+1}\}$ and augment our basis space to get $X_{N+1} = X_N \oplus \operatorname{span}\{u(\mu_{N+1})\}$. We finally orthonormalize the snapshots in $X_{N+1}$ to obtain the basis functions $\xi_n$, where $(\xi_n, \xi_m)_X = \delta_{nm}$.
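The loop below is a minimal sketch of this procedure, under assumptions not in the original: a callable `truth_solve(mu)` that solves (2) at full order, a callable `error_bound(Z, mu)` that evaluates $\Delta_N(\mu)$, and a matrix `X` defining the $X$-inner product used for the Gram-Schmidt orthonormalization.

```python
import numpy as np

def greedy(Xi_train, truth_solve, error_bound, X, mu_1, N_max, tol):
    """Greedy construction of a nested reduced basis.

    Xi_train    : sequence of parameter points
    truth_solve : mu -> full-order solution vector u(mu) (solves (2))
    error_bound : (Z, mu) -> a posteriori error bound Delta_N(mu)
    X           : (n, n) inner-product matrix defining the X-norm
    """
    S = [mu_1]
    Z = orthonormalize(truth_solve(mu_1)[:, None], X)
    for _ in range(N_max - 1):
        # pick the parameter where the error bound is worst over the training set
        deltas = [error_bound(Z, mu) for mu in Xi_train]
        i_star = int(np.argmax(deltas))
        if deltas[i_star] < tol:
            break                          # tolerance satisfied, stop enriching
        S.append(Xi_train[i_star])
        # augment the basis with the new snapshot and re-orthonormalize
        new_snapshot = truth_solve(Xi_train[i_star])
        Z = orthonormalize(np.column_stack([Z, new_snapshot]), X)
    return S, Z

def orthonormalize(V, X):
    """Gram-Schmidt orthonormalization of the columns of V in the X-inner product."""
    Q = []
    for v in V.T:
        for q in Q:
            v = v - (q @ X @ v) * q        # remove component along q
        Q.append(v / np.sqrt(v @ X @ v))   # normalize in the X-norm
    return np.column_stack(Q)
```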
A posteriori error bound
Let $e(\mu) = u(\mu) - u_N(\mu)$ be the true error. Then we have the following bound:

$$\|e(\mu)\|_X \le \Delta_N(\mu) = \frac{\|\hat{e}(\mu)\|_X}{\alpha_{\mathrm{LB}}(\mu)},$$

where $\alpha_{\mathrm{LB}}(\mu)$ is the lower bound for the coercivity constant and $\hat{e}(\mu)$ is given by the formula

$$\hat{e}(\mu) = \sum_{q=1}^{Q_f} \Theta_f^q(\mu)\, \mathcal{C}_q + \sum_{q=1}^{Q_a} \sum_{n=1}^{N} \Theta_a^q(\mu)\, u_{N,n}(\mu)\, \mathcal{L}_{q,n},$$

where $u_{N,n}(\mu)$ are the coefficients of $u_N(\mu) = \sum_{n=1}^{N} u_{N,n}(\mu)\, \xi_n$, and $\mathcal{C}_q$ and $\mathcal{L}_{q,n}$ are such that

$$(\mathcal{C}_q, v)_X = f_q(v) \quad \text{and} \quad (\mathcal{L}_{q,n}, v)_X = -a_q(\xi_n, v), \quad \text{for all } v \in X,$$

and the $\xi_n$ are the orthonormalised snapshots.
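To connect the bound to computable quantities, here is a minimal sketch that evaluates $\Delta_N(\mu)$ directly from discrete full-order objects (`A_mu`, `f_mu`, the inner-product matrix `X`, and the basis `Z` are assumed names, as above). For clarity it solves for the Riesz representative $\hat{e}(\mu)$ instead of assembling it from the precomputed pieces $\mathcal{C}_q$ and $\mathcal{L}_{q,n}$; the affine formula above exists precisely so that, in practice, the bound can be evaluated online without any full-order solve.

```python
import numpy as np

def error_bound(A_mu, f_mu, X, Z, u_N, alpha_LB):
    """Evaluate Delta_N(mu) = ||e_hat(mu)||_X / alpha_LB(mu) for one parameter.

    Discretely, the Riesz representative e_hat of the residual solves
        X e_hat = f_mu - A_mu (Z u_N),
    mirroring (e_hat, v)_X = f(v; mu) - a(u_N(mu), v; mu) for all v.
    """
    residual = f_mu - A_mu @ (Z @ u_N)       # residual functional as a vector
    e_hat = np.linalg.solve(X, residual)     # Riesz representative of the residual
    norm_e_hat = np.sqrt(e_hat @ residual)   # ||e_hat||_X^2 = e_hat' X e_hat = e_hat' residual
    return norm_e_hat / alpha_LB
```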