Spectral methods are discrete approximation schemes for partial differential equations. They are important for the solution of the Navier-Stokes equations in meteorology and in climate models.
The basic idea of spectral methods is to choose a finite set of functions and calculate the optimal approximation of the exact solution by these functions. These basis functions are often part of an orthonormal basis of a Hilbert space, and more specifically trigonometric functions, hence the name “spectral” methods. In contrast to finite elements, spectral methods usually refer to an approximation with basis functions that don’t have a restricted support, but are “globally nontrivial”.
For parabolic equations with a notion of time, spectral methods are usually used to discretize the spatial dimensions only, not the time dimension.
The following paragraph is meant as an introduction to the method for pure mathematicians with a background in functional analysis.
For illustrative purposes we will make some simplifying assumptions. Let’s assume that we have an infinite dimensional topological vector space $T$, its topological dual $T^*$, and a (closable, densely defined differential) operator
$$ M: T \to T $$
such that the equation
$$ M(f) = 0 $$
has a unique solution $f$.
We omit initial and boundary conditions for the moment. In order to calculate an approximation to the exact solution $f$, we need to turn the infinite dimensional problem into a finite dimensional one.
The basic idea of spectral methods is to choose a finite dimensional subspace $T_g$ of $T$ spanned by a given set of functions $\{g_1, ..., g_n \}$, which are called in this context trial, expansion or approximation functions. We are looking for the projection of the exact solution $f$ to the subspace $T_g$, but since we don’t know $f$, we cannot calculate the exact expansion
$$ \sum_{k = 1}^n \alpha_k g_k $$
But we can test the goodness of a given approximation $f_{\alpha} := \sum_{k = 1}^n \alpha_k g_k$ by testing $M(f_{\alpha})$ for “smallness”. The function $M(f_{\alpha})$ is sometimes called the residual function.
Different spectral methods differ mainly in their interpretation of “smallness” and their strategy to determine the best approximation.
One particular “smallness” test in spectral methods is done via a choice of a finite dimensional subspace $T^*_h$ of the dual space $T^*$, spanned by the elements $\{h_1, ..., h_n \}$; we then demand that
$$ h_i(M(f_{\alpha})) = 0 $$
should hold for all $h_i \in T^*_h$. The “functions” $h_i$ are called test or weight functions. Due to the choice of finite dimensional subspaces the problem is reduced to a finite set of (linear) algebraic equations.
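As an illustration of this reduction, here is a minimal sketch with an invented toy problem (the operator, trial and test functions below are illustrative assumptions, not taken from the text): for $M(u) = u'' - f$ on $[0, \pi]$, trial functions $g_k(x) = \sin(k x)$ and test functionals $h_j(r) = \int_0^{\pi} r(x) \sin(j x)\, dx$, the conditions $h_j(M(f_{\alpha})) = 0$ become a linear system for the coefficients $\alpha_k$:

```python
import numpy as np

# Toy weighted-residual computation (illustrative assumptions, not from the
# text): M(u) = u'' - f on [0, pi], trial functions g_k(x) = sin(k*x),
# test functionals h_j(r) = integral of r(x)*sin(j*x) dx.
# Demanding h_j(M(f_alpha)) = 0 for j = 1..n gives a linear system A a = b.

n = 4
x, dx = np.linspace(0.0, np.pi, 2001, retstep=True)
f = -np.sin(x)                      # chosen so that the exact solution is sin(x)

g   = [np.sin(k * x) for k in range(1, n + 1)]             # trial functions
gpp = [-(k**2) * np.sin(k * x) for k in range(1, n + 1)]   # second derivatives

integrate = lambda y: np.sum(y) * dx    # crude quadrature, good enough here

A = np.array([[integrate(gpp[k] * g[j]) for k in range(n)] for j in range(n)])
b = np.array([integrate(f * g[j]) for j in range(n)])

alpha = np.linalg.solve(A, b)
print(np.round(alpha, 4))               # close to [1, 0, 0, 0]: f_alpha ≈ sin(x)
```

Note that with sine trial functions each $g_k$ already vanishes at the endpoints, so homogeneous boundary conditions are fulfilled automatically by the whole subspace.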
Often the function space is a Hilbert space and the approximation functions are elements of an orthonormal basis. This is the case, for example, if one uses Fourier series. In some cases it is possible to show that the expansion coefficients of, for example, a smooth function in an orthonormal basis decay exponentially fast. This is sometimes called exponential or spectral accuracy in the literature.
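This decay can be observed numerically; in the following sketch the smooth, $2\pi$-periodic test function $e^{\sin x}$ is an arbitrary choice (not from the text), and its Fourier coefficients are computed with the FFT:

```python
import numpy as np

# Spectral accuracy, observed numerically: the Fourier coefficients of a
# smooth 2*pi-periodic function decay exponentially fast. The test function
# exp(sin(x)) is an arbitrary smooth choice.

N = 64
x = 2.0 * np.pi * np.arange(N) / N
c = np.fft.rfft(np.exp(np.sin(x))) / N      # approximate Fourier coefficients

for k in (2, 4, 8, 16):
    print(k, abs(c[k]))                     # magnitudes drop off very quickly
```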
The only difference between spectral and finite element methods is that spectral methods use approximation functions that are not local, while finite element methods use approximation functions whose support is restricted to certain simple subsets of the domain of the equation one would like to solve.
As a consequence, finite elements can only handle bounded domains. It is possible to combine both approaches and get, for example, the hp-FEM.
Basis functions should be easy to evaluate, both numerically and, if a step of the overall calculation can and should be performed by hand, with pen and paper.
Basis functions should be complete in the sense that the approximation gets “arbitrarily good” with increasing number of approximation functions.
To make the latter precise, one needs a measure of closeness, which will usually be a norm $\| \cdot \|$ of a Banach space of functions. Then we would like to have a relation like
$$ \| f - f_n \| \to 0 \quad \text{as} \; n \to \infty $$
where $n$ is the number of approximation functions (or, to be more precise, the dimension of the subspace of the topological vector space spanned by the approximation functions).
For practical problems, it would be nice to know that the approximation converges with a certain rate, for example exponentially:
$$ \| f - f_n \| \le C \, e^{- c n} \quad \text{for some constants} \; C, c > 0 $$
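Such a rate can be checked experimentally. In the following sketch the test function and the degrees are arbitrary choices (not from the text): the maximum error of Chebyshev interpolation of a smooth function falls off geometrically as the number of basis functions grows.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Experimental convergence check (illustrative choices throughout):
# interpolate the smooth function f(x) = 1/(1 + 25 x^2) at Chebyshev
# points and measure the maximum error for an increasing number of
# basis functions n.

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xs = np.linspace(-1.0, 1.0, 5001)        # fine grid for measuring the error

errs = {}
for n in (10, 20, 40):
    k = np.arange(n + 1)
    xk = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # Chebyshev nodes
    coef = C.chebfit(xk, f(xk), n)       # n+1 nodes, degree n: interpolation
    errs[n] = np.max(np.abs(C.chebval(xs, coef) - f(xs)))
    print(n, errs[n])                    # error shrinks geometrically in n
```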
With regard to boundary conditions, there are two possibilities:
One can choose approximation functions that fulfill the boundary conditions element-wise,
one imposes additional constraints on the approximation such that the approximation will fulfill the boundary conditions, even though the approximation functions do not.
The most commonly used approximation functions are
trigonometric functions (i.e. Fourier series),
We will have a look at an ordinary differential equation on the interval $[-1, 1]$ with boundary conditions:
$$ u_{xx} - (x^6 + 3 x^2) \, u = 0 $$
where we have written $u_{xx}$ for $\frac{d^2 u}{d x^2}$.
The boundary conditions are:
$$ u(-1) = u(1) = 1 $$
This problem has an exact solution, which is:
$$ u(x) = \exp\left( \frac{x^4 - 1}{4} \right) $$
The first step in calculating an approximate solution using the spectral method is to choose the approximation functions; in this example we choose the polynomials $1, x, x^2$. With this choice, the approximation functions themselves don’t satisfy the boundary conditions, therefore we make another choice and choose as approximation
$$ u_2(x) = 1 + (1 - x^2)(a_0 + a_1 x + a_2 x^2) $$
This way, the boundary conditions are satisfied independently of the approximation coefficients $a_0, a_1, a_2$.
The residual function is of course
$$ R(x; a_0, a_1, a_2) = u_{2, xx} - (x^6 + 3 x^2) \, u_2 $$
We choose as test functions three Dirac delta functions, which is equivalent to demanding that the residual function vanish at three chosen points $(x_0, x_1, x_2)$. This choice has its own name: it is called the collocation or pseudospectral method.
Choosing as collocation points the tuple $(- \frac{1}{2}, 0, \frac{1}{2})$ leads to a system of three linear equations for the approximation coefficients, with the solution:
$$ a_0 = a_2 = - \frac{784}{3807} \approx - 0.2059, \qquad a_1 = 0 $$
The resulting approximation may seem to be crude at first sight, but the following graphic shows that it is actually quite close to the exact solution already:
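For reference, the collocation computation of this example can be reproduced in a few lines; the following sketch assumes the boundary value problem $u_{xx} = (x^6 + 3 x^2)\, u$ on $[-1, 1]$ with $u(\pm 1) = 1$, the ansatz $u_2(x) = 1 + (1 - x^2)(a_0 + a_1 x + a_2 x^2)$, and the collocation points $(-\frac{1}{2}, 0, \frac{1}{2})$; numpy is an implementation choice:

```python
import numpy as np

# Collocation (pseudospectral) computation for the example:
#   u'' = (x**6 + 3*x**2) * u  on [-1, 1],  u(-1) = u(1) = 1,
# with the ansatz u2(x) = 1 + (1 - x**2) * (a0 + a1*x + a2*x**2),
# which satisfies the boundary conditions for any coefficients.

def residual_coeffs(x):
    """Coefficients of (a0, a1, a2) and the constant term of R(x) = u2'' - q*u2."""
    q = x**6 + 3.0 * x**2
    # u2   = 1 + a0*(1 - x^2) + a1*(x - x^3)  + a2*(x^2 - x^4)
    # u2'' =     a0*(-2)      + a1*(-6*x)     + a2*(2 - 12*x^2)
    row = [-2.0 - q * (1.0 - x**2),
           -6.0 * x - q * (x - x**3),
           2.0 - 12.0 * x**2 - q * (x**2 - x**4)]
    return row, -q          # constant term comes from u2'' - q*1

pts = (-0.5, 0.0, 0.5)                                # collocation points
A = np.array([residual_coeffs(x)[0] for x in pts])
b = np.array([-residual_coeffs(x)[1] for x in pts])   # constants to the right side

a0, a1, a2 = np.linalg.solve(A, b)
u2 = lambda x: 1.0 + (1.0 - x**2) * (a0 + a1 * x + a2 * x**2)
exact = lambda x: np.exp((x**4 - 1.0) / 4.0)

print(a0, a1, a2)                            # a1 vanishes by symmetry
print(u2(0.0), exact(0.0))                   # approximation vs exact at x = 0
```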
The choices of the approximation functions and of the test functions are completely arbitrary, so the follow-up question to this example is whether there is any way to determine what a good choice would be.
We could have known that $a_1$ would turn out to be zero, for example, because the problem is symmetric with respect to reflection along the y-axis. Without using this knowledge, we spent some computational effort on determining $a_1$, which did not help in making the approximation any better.
A two volume introduction to the theory of the method, starting with the basics:
… and continuing with the explanation of how complex boundary conditions should be handled, with applications to CFD:
Based on the preceding introduction to the theory, the following volume concentrates on the task of implementing concrete algorithms:
An elementary introduction with special emphasis on spherical coordinates (important for GCMs):