Chapter 17
Multigrid Methods
17.1 A First Glance at Multigrid

17.1.1 Poisson's Equation and Model Problem 1
In this section, we introduce Model Problem 1, Poisson's equation in 2D. The discrete Model Problem 1 is the classical model for a discrete elliptic boundary value problem, and virtually every numerical solver has been applied to it for comparison. We will study in detail the discrete Poisson equation with Dirichlet boundary conditions
$-\Delta_h u_h(x,y) = f_h^\Omega(x,y)$ for $(x,y) \in \Omega_h$  (17.1)
$u_h(x,y) = f_h^\Gamma(x,y)$ for $(x,y) \in \Gamma_h = \partial\Omega_h$

in the unit square $\Omega = (0,1)^2 \subset \mathbb{R}^2$, with $h = 1/n$, $n \in \mathbb{N}$. Here, $L_h = -\Delta_h$ is the standard 5-point $O(h^2)$-approximation (explained below) of the partial differential operator $L$,

$Lu = -\Delta u = -u_{xx} - u_{yy}$  (17.2)

on the square grid $G_h$. $O(h^2)$ here means that one can derive consistency relations of the form

$Lu - L_h u = O(h^2)$ for $h \to 0$

for sufficiently smooth functions $u$ ($u \in C^4(\bar\Omega)$, for example), i.e. the leading terms in an asymptotic expansion with respect to $h$ are proportional to $h^2$. To illustrate what the elimination of boundary conditions exactly means, we consider the following example.
Example 17.1 The discrete Poisson equation with eliminated Dirichlet boundary conditions can formally be written in stencil notation. For $(x,y) \in \Omega_h$ not adjacent to the boundary this means:

$f_h(x,y) = f(x,y)$

$L_h u_h(x,y) = -\Delta_h u_h(x,y)$
$\quad = \frac{1}{h^2}\,\big[\,4u_h(x,y) - u_h(x-h,y) - u_h(x+h,y) - u_h(x,y-h) - u_h(x,y+h)\,\big]$
$\quad = \frac{1}{h^2} \begin{bmatrix} & -1 & \\ -1 & 4 & -1 \\ & -1 & \end{bmatrix}_h u_h(x,y)\,.$

The notation

$-\Delta_h = \frac{1}{h^2} \begin{bmatrix} & -1 & \\ -1 & 4 & -1 \\ & -1 & \end{bmatrix}_h$

is a first example of the stencil notation. For $(x,y) \in \Omega_h$ adjacent to a (here: the west) boundary, $L_h$ reads

$f_h(x,y) = f(x,y) + \frac{1}{h^2}\,f(x-h,y)$

$L_h u_h(x,y) = \frac{1}{h^2}\,\big[\,4u_h(x,y) - u_h(x+h,y) - u_h(x,y-h) - u_h(x,y+h)\,\big]$
$\quad = \frac{1}{h^2} \begin{bmatrix} & -1 & \\ 0 & 4 & -1 \\ & -1 & \end{bmatrix}_h u_h(x,y)\,.$
For $(x,y) \in \Omega_h$ in a (here: the northwest) corner we have
$f_h(x,y) = f(x,y) + \frac{1}{h^2}\,\big[\,f(x-h,y) + f(x,y+h)\,\big]$

$L_h u_h(x,y) = \frac{1}{h^2}\,\big[\,4u_h(x,y) - u_h(x+h,y) - u_h(x,y-h)\,\big]$
$\quad = \frac{1}{h^2} \begin{bmatrix} & 0 & \\ 0 & 4 & -1 \\ & -1 & \end{bmatrix}_h u_h(x,y)\,.$

In this example, elimination of the boundary conditions is simple. Often, elimination of boundary conditions may be complicated and not preferable. In such cases, a particular treatment of the boundary conditions in the multigrid process may be needed.
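To make the elimination concrete, here is a minimal numpy sketch (the names `apply_Lh` and `eliminate_dirichlet_rhs` are ours, not from the text): the interior stencil is applied with eliminated boundary neighbours counted as zero, and the boundary data is moved into the right-hand side exactly as in Example 17.1.

```python
import numpy as np

def apply_Lh(u, h):
    """Apply L_h = -Delta_h (5-point stencil) to an (n-1)x(n-1) array of
    interior values; eliminated Dirichlet neighbours count as zero, since
    their contribution has been moved into the right-hand side."""
    up = np.pad(u, 1)  # zero padding stands for the eliminated boundary
    return (4.0 * up[1:-1, 1:-1] - up[:-2, 1:-1] - up[2:, 1:-1]
            - up[1:-1, :-2] - up[1:-1, 2:]) / h**2

def eliminate_dirichlet_rhs(f, g, h):
    """Modify the right-hand side as in Example 17.1: each interior point
    adjacent to the boundary receives 1/h^2 times the neighbouring boundary
    value(s) of g; corner-adjacent points receive two such terms.
    f: (n-1)x(n-1) interior values of f; g: (n+1)x(n+1) full-grid array
    whose boundary entries hold the Dirichlet data."""
    fh = f.astype(float).copy()
    fh[0, :]  += g[0, 1:-1] / h**2    # points adjacent to the west boundary
    fh[-1, :] += g[-1, 1:-1] / h**2   # east
    fh[:, 0]  += g[1:-1, 0] / h**2    # south
    fh[:, -1] += g[1:-1, -1] / h**2   # north
    return fh
```

For a bi-quadratic function such as $x(1-x)y(1-y)$, the second differences along each axis are exact, so `apply_Lh` reproduces $-\Delta u$ without discretization error.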
17.1.2 The Two Ingredients of Multigrid
In this section, we introduce the multigrid idea heuristically for Model Problem 1. We use a grid-function oriented notation. The iteration formula of the classical lexicographic Gauss-Seidel method (GS-LEX) for Poisson's equation reads

$u_h^{m+1}(x_i,y_j) = \frac{1}{4}\,\big[\,h^2 f_h(x_i,y_j) + u_h^{m+1}(x_i-h,y_j) + u_h^m(x_i+h,y_j) + u_h^{m+1}(x_i,y_j-h) + u_h^m(x_i,y_j+h)\,\big]$  (17.3)
with $(x_i,y_j) \in \Omega_h$. Here $u_h^m$ and $u_h^{m+1}$ are the approximations of $u_h(x_i,y_j)$ before and after an iteration. If we apply (17.3) to Poisson's equation, we recognize the following phenomenon: after a few iterations, the error of the approximation becomes smooth. It does not necessarily become small, but smooth; see Figure 17.1 for an illustration of this error smoothing effect. Looking at the error
$v_h^m(x_i,y_j) = u_h(x_i,y_j) - u_h^m(x_i,y_j)\,,$

formula (17.3) means

$v_h^{m+1}(x_i,y_j) = \frac{1}{4}\,\big[\,v_h^{m+1}(x_i-h,y_j) + v_h^m(x_i+h,y_j) + v_h^{m+1}(x_i,y_j-h) + v_h^m(x_i,y_j+h)\,\big]\,.$  (17.4)
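The sweep (17.3) translates directly into code. A minimal sketch (our naming; `u` carries the Dirichlet values on its boundary, which the sweep never touches):

```python
import numpy as np

def gauss_seidel_lex(u, f, h):
    """One lexicographic Gauss-Seidel sweep (17.3) for -Delta_h u_h = f_h.
    u: (n+1)x(n+1) array including (fixed) boundary values; updated in place.
    f: (n+1)x(n+1) array; only interior entries are used."""
    n = u.shape[0] - 1
    for i in range(1, n):            # lexicographic order: row by row
        for j in range(1, n):
            u[i, j] = 0.25 * (h * h * f[i, j]
                              + u[i - 1, j] + u[i + 1, j]   # new / old value
                              + u[i, j - 1] + u[i, j + 1])  # new / old value
    return u
```

Because the points $(x_i-h,y_j)$ and $(x_i,y_j-h)$ have already been visited in this ordering, the sweep automatically uses the "new" values, exactly as in (17.3).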
Figure 17.1: Influence of Gauss-Seidel iteration (17.3) on the error (Model Problem 1): error of the initial guess, after 5 iterations, and after 10 iterations.

Obviously, the iteration formula can be interpreted as an error averaging process. Error smoothing is one of the two basic principles of the multigrid approach:

Smoothing principle: Many classical iterative methods (Gauss-Seidel etc.), if appropriately applied to discrete elliptic problems, have a strong smoothing effect on the error of any approximation.

The other basic principle is the following: a quantity that is smooth on a certain grid can, without essential loss of information, also be approximated on a coarser grid, say a grid with double the mesh size. In other words: if we are sure that the error of our approximation has become smooth after some relaxation sweeps, we may approximate this error by a suitable procedure on a (much) coarser grid. Qualitatively, this is the coarse grid approximation principle:

Coarse grid principle: A smooth error term is well approximated on a coarse grid. A coarse grid procedure is substantially less expensive (substantially fewer grid points) than a fine grid procedure.

As this principle holds for error or "correction" quantities, we also speak of the coarse grid correction (CGC) principle. Let us illustrate these considerations and explain them heuristically by looking at the Fourier expansion of the error. In our model problem the error
$v_h = v_h(x,y)$, considered as a function of the discrete variables $x$ and $y$, can be written as

$v_h(x,y) = \sum_{k,\ell=1}^{n-1} \alpha_{k,\ell}\,\sin k\pi x\,\sin \ell\pi y\,.$  (17.5)

For $(x,y) \in \Omega_h$, the functions

$\varphi_h^{k,\ell}(x,y) = \sin k\pi x\,\sin \ell\pi y \qquad (k,\ell = 1,\dots,n-1)$  (17.6)

are the discrete eigenfunctions of the discrete operator $\Delta_h$. The fact that this error becomes smooth after some iteration steps means that the high frequency components in (17.5), i.e. the

$\alpha_{k,\ell}\,\sin k\pi x\,\sin \ell\pi y$ with $k$ or $\ell$ large,  (17.7)

become small after a few iterations, whereas the low frequency components, i.e. the

$\alpha_{k,\ell}\,\sin k\pi x\,\sin \ell\pi y$ with $k$ and $\ell$ small,  (17.8)

hardly change. The distinction between high and low frequencies is important in the multigrid context. In the following subsection, we give a first definition of high and low frequencies and point out how this concept is related to the coarse grid under consideration.
17.1.3 High and Low Frequencies and Coarse Meshes
We again consider Model Problem 1 on a grid $\Omega_h$ with mesh size $h = 1/n$. Additionally, we consider Model Problem 1 also on a coarser grid $\Omega_H$ with mesh size $H > h$. Assuming that $n$ is an even number, we may, for instance, choose $H = 2h$, which is a very natural choice in the multigrid context. This choice of the coarse grid is therefore called standard coarsening. For the definition of the high and low frequencies, we return to the eigenfunctions $\varphi^{k,\ell}$ in (17.6). For given $(k,\ell)$, we consider the (four) eigenfunctions $\varphi^{k,\ell}$, $\varphi^{n-k,n-\ell}$, $\varphi^{n-k,\ell}$, $\varphi^{k,n-\ell}$ and observe that they coincide on $\Omega_{2h}$ in the following sense:

$\varphi^{k,\ell}(x,y) = -\varphi^{n-k,\ell}(x,y) = -\varphi^{k,n-\ell}(x,y) = \varphi^{n-k,n-\ell}(x,y)$ for $(x,y) \in \Omega_{2h}\,.$
This means that these four eigenfunctions cannot reasonably be distinguished on $\Omega_{2h}$. (For $k$ or $\ell = n/2$, the $\varphi^{k,\ell}$ vanish on $\Omega_{2h}$.) This gives rise to the following definition of low and high frequencies:

Definition (in the context of Model Problem 1): For $k,\ell \in \{1,\cdots,n-1\}$, we call $\varphi^{k,\ell}$ an eigenfunction (or a component) of

low frequency if $\max(k,\ell) < n/2$, and
high frequency if $n/2 \le \max(k,\ell) < n$.

Obviously, only the low frequencies are visible on $\Omega_{2h}$, since all high frequencies coincide with a low frequency on $\Omega_{2h}$ (or vanish on $\Omega_{2h}$). The fact that high frequencies coincide with low ones is also called aliasing of frequencies. For the 1D case with $n = 8$, we illustrate the above definition in Figure 17.2. We summarize this consideration: the low frequency components represent meaningful grid functions also on a grid $\Omega_{2h}$ with double the mesh size, whereas the high frequency components do not; their high frequencies are not "visible" on the $\Omega_{2h}$ grid.
Figure 17.2: Low and high frequency components for a 1D example ($n = 8$); the low frequency components are visible also on $\Omega_{2h}$, the high frequency components are not.
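The aliasing just described is easy to check numerically. A small sketch for the 1D case $n = 8$ of Figure 17.2, sampling $\varphi^k(x) = \sin k\pi x$ on the fine and coarse interior points:

```python
import numpy as np

n = 8
h = 1.0 / n
x_fine = np.arange(1, n) * h               # interior points of the fine grid
x_coarse = np.arange(1, n // 2) * 2 * h    # interior points of the 2h grid

def phi(k, x):
    """1D eigenfunction phi^k(x) = sin(k*pi*x)."""
    return np.sin(k * np.pi * x)

k = 2
# Aliasing: phi^k and phi^(n-k) coincide (up to sign) at all coarse points,
print(np.allclose(phi(k, x_coarse), -phi(n - k, x_coarse)))   # True
# phi^(n/2) vanishes on the coarse grid entirely,
print(np.allclose(phi(n // 2, x_coarse), 0.0))                # True
# while on the fine grid the two frequencies are clearly distinct.
print(np.allclose(phi(k, x_fine), -phi(n - k, x_fine)))       # False
```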
If we apply this distinction of low and high frequencies to the representation of $v_h(x,y)$ in (17.5), we can decompose the sum in (17.5) into corresponding partial sums:

$\sum_{k,\ell=1}^{n-1} \alpha_{k,\ell}\,\varphi^{k,\ell} = \sum_{\mathrm{low}} \alpha_{k,\ell}\,\varphi^{k,\ell} + \sum_{\mathrm{high}} \alpha_{k,\ell}\,\varphi^{k,\ell}\,,$

where

$\sum_{\mathrm{low}} \alpha_{k,\ell}\,\varphi^{k,\ell} = \sum_{k,\ell=1}^{n/2-1} \alpha_{k,\ell}\,\varphi^{k,\ell}$

and

$\sum_{\mathrm{high}} \alpha_{k,\ell}\,\varphi^{k,\ell} = \sum_{\substack{k,\ell \le n-1 \\ n/2 \le \max(k,\ell)}} \alpha_{k,\ell}\,\varphi^{k,\ell}\,.$
Remark 17.1.1 ($H = 4h$ and other choices of $H$) From our definition, it immediately becomes clear that the terms "high" and "low frequency" are related to both the fine grid $\Omega_h$ and the coarse grid $\Omega_H$ that we consider. If, for example, we chose $H = 4h$ (assuming that $n$ is a multiple of 4), our definition of high and low frequencies would have to be modified in the following way:

$\varphi^{k,\ell}$ is an $(h,4h)$ low frequency component if $\max(k,\ell) < n/4$;
$\varphi^{k,\ell}$ is an $(h,4h)$ high frequency component otherwise.

The choice $H = 4h$ is not very practical in the multigrid context since it usually does not lead to the most efficient algorithms, but it is not out of the question, either. Other, more practical choices of coarse grids different from standard coarsening are considered later on.
17.1.4 From Two Grids to Multigrid
In the following, we continue our heuristic considerations a little further, extending them from the two grid levels $\Omega_h$, $\Omega_{2h}$ to a sequence of levels. In this setting, we will also be able to explain the multigrid (not only the two-grid) idea, still, of course, very heuristically. Figure 17.3 shows such a sequence of grids.
Figure 17.3: A sequence of coarse grids starting with $h = 1/16$.

We assume additionally that $n$ is a power of 2, meaning that $h = 2^{-p}$. Then we can form the grid sequence

$\Omega_h,\ \Omega_{2h},\ \Omega_{4h},\ \cdots,\ \Omega_{h_0}$  (17.9)
just by doubling the mesh size successively. We assume that this sequence ends with a coarsest grid $\Omega_{h_0}$ (which may well be the grid consisting of only one interior grid point $(x,y) = (1/2,1/2)$, i.e. $h_0 = 1/2$; see Figure 17.3). We now consider a decomposition of the error representation (17.5) into partial sums which correspond to the grid sequence (17.9), having the following idea in mind: in the same way as we have distinguished low and high frequencies with respect to the pair of grids $\Omega_h$ and $\Omega_{2h}$ in the previous section, we now make an additional distinction between high and low frequencies with respect to the pair $\Omega_{2h}$ and $\Omega_{4h}$. (This distinction, of course, only applies to those frequencies which are low with respect to the pair $\Omega_h$ and $\Omega_{2h}$.) We continue with the pair $\Omega_{4h}$ and $\Omega_{8h}$, and so on. As discussed and recognized above, using Gauss-Seidel-type iterations on the original $\Omega_h$ grid causes the $(h,2h)$-high frequency error components to become small rapidly. The remaining low frequency error components are visible and can thus be approximated on $\Omega_{2h}$. Of course, an equation which determines the low frequency error has to be defined in a suitable way on $\Omega_{2h}$. Performing Gauss-Seidel-type iteration not only on the original $\Omega_h$ grid, but also on $\Omega_{2h}$ and correspondingly on $\Omega_{4h}$ etc., causes the $(2h,4h)$-high frequency components, the $(4h,8h)$-high frequency components etc. to decrease rapidly. Only on the coarsest grid may we have to do something special, which is trivial if $\Omega_{h_0}$ consists of only one (or few) point(s). All of the information on the errors obtained on a coarse grid has to be interpolated back to the next finer one. Altogether, this leads to a very fast overall reduction of the error.

Summary: What we have described and explained so far is the basic idea
of multigrid. Our description is not at all an algorithm. It is not even a clear verbal description of an algorithmic principle. For example, we have neither said precisely what a Gauss-Seidel iteration on $\Omega_{2h}$, $\Omega_{4h}$ etc. means algorithmically and to which grid functions it is to be applied, nor how to go from one level to the other, etc. However, the flavor of multigrid is in it:

Flavor of multigrid: Gauss-Seidel iteration (or, more generally, an appropriate iterative scheme) on different grid levels gives rapid reduction of the corresponding high frequency components, and as this process covers all frequencies, a rapid reduction of the overall error can be achieved.
17.2 Basic Multigrid I
The multigrid idea is based on two principles: error smoothing and coarse grid correction. In this chapter, we explain how these principles are combined to form a multigrid algorithm. Basic multigrid will be described systematically. In Section 17.2.1, we discuss the smoothing properties of classical iterative solvers. Sections 17.2.2, 17.2.4 and 17.2.6 give a systematic introduction to two-grid iteration, multigrid iteration and the full multigrid method. Some standard multigrid components are described in Section 17.2.3. We prefer a presentation of the two-grid cycle in Section 17.2.2 which starts with the idea of an approximate solution of the defect equation and then brings together smoothing and coarse grid correction. The methods described in Sections 17.2.2, 17.2.3, 17.2.4 and 17.2.6 are general, although all concrete examples refer to Poisson's equation. A concrete fast multigrid Poisson solver for the 2D case is presented in Section 17.2.5.
17.2.1 Error Smoothing Procedures
As we have observed for Model Problem 1, the usual Gauss-Seidel iteration has a remarkable smoothing effect on the error $v_h^m$ of an approximation $u_h^m$. As this property is fundamental for the multigrid idea, we discuss smoothing procedures in more detail here. We will, in particular, consider two classical iteration methods: Gauss-Seidel-type and Jacobi-type iterations. We will see that these methods are suitable for error smoothing. The smoothing properties will, however, turn
out to be (strongly) dependent on the right choice of relaxation parameters and, in the case of Gauss-Seidel iteration, also on the ordering of grid points. All iterative methods which we discuss in this chapter are pointwise iterations; line- or block-type iterations are not yet considered here. We start our discussion with Jacobi-type iterations since their analysis is particularly easy and illustrative. In general, however, appropriate Gauss-Seidel-type iterations turn out to be better smoothers than appropriate Jacobi-type iterations. In the following we will speak of Jacobi- and Gauss-Seidel-type relaxation methods rather than iteration methods.

Relaxation methods: Classical iteration methods such as Gauss-Seidel-type and Jacobi-type iterations are often called relaxation methods (or smoothing methods or smoothers) if they are used for the purpose of error smoothing.

Jacobi-Type Iteration (Relaxation)

For Model Problem 1, the iteration formula of the Jacobi iteration reads
$z_h^{m+1}(x_i,y_j) = \frac{1}{4}\,\big[\,h^2 f_h(x_i,y_j) + u_h^m(x_i-h,y_j) + u_h^m(x_i+h,y_j) + u_h^m(x_i,y_j-h) + u_h^m(x_i,y_j+h)\,\big]$  (17.10)

$u_h^{m+1} = z_h^{m+1}$

(with $(x_i,y_j) \in \Omega_h$), where $u_h^m$ denotes the old approximation and $u_h^{m+1}$ the new approximation of the iteration. The corresponding Jacobi iteration operator is

$S_h = I_h - \frac{h^2}{4}\,L_h$

(where $I_h$ is the identity operator). We can generalize this iteration by introducing a relaxation parameter $\omega$:

$u_h^{m+1} = u_h^m + \omega\,(z_h^{m+1} - u_h^m)\,,$  (17.11)
which is called the (damped) Jacobi relaxation ($\omega$-JAC). Obviously, $\omega$-JAC and Jacobi iteration coincide for $\omega = 1$. The operator for $\omega$-JAC reads

$S_h(\omega) = I_h - \frac{\omega h^2}{4}\,L_h = \frac{\omega}{4} \begin{bmatrix} & 1 & \\ 1 & 4(\omega^{-1}-1) & 1 \\ & 1 & \end{bmatrix}_h\,.$  (17.12)

The convergence properties of $\omega$-JAC can be analyzed easily by considering the eigenfunctions of $S_h$, which are the same as those of $L_h$, namely

$\varphi_h^{k,\ell}(x,y) = \sin k\pi x\,\sin \ell\pi y \qquad \big((x,y) \in \Omega_h;\ k,\ell = 1,\dots,n-1\big)\,.$  (17.13)

The corresponding eigenvalues of $S_h(\omega)$ are

$\chi_h^{k,\ell} = \chi_h^{k,\ell}(\omega) = 1 - \frac{\omega}{2}\,(2 - \cos k\pi h - \cos \ell\pi h)\,.$  (17.14)

For the spectral radius $\rho(S_h(\omega)) = \max\{|\chi_h^{k,\ell}| : k,\ell = 1,\dots,n-1\}$, we obtain

for $0 < \omega \le 1$: $\rho(S_h(\omega)) = |\chi_h^{1,1}| = 1 - \omega(1 - \cos \pi h) = 1 - O(h^2)\,;$  (17.15)
else: $\rho(S_h(\omega)) \ge 1$ (for $h$ small enough).
In particular, regarding the (unsatisfactory) asymptotic convergence, there is no use in introducing the relaxation parameter: $\omega = 1$ is the best choice.

Smoothing Properties of Jacobi Relaxation

The situation is different with respect to the smoothing properties of Jacobi relaxation: in order to achieve reasonable smoothing, we have to introduce a relaxation parameter $\omega \ne 1$ here. For $0 < \omega \le 1$, we first observe from (17.15) that the smoothest eigenfunction $\varphi_h^{1,1}$ is responsible for the slow convergence of Jacobi's method. Highly oscillating eigenfunctions are reduced much faster if $\omega$ is chosen properly. To see this, we consider the approximations before ($w_h$) and after ($\bar w_h$) one relaxation step and expand the errors before ($v_h$) and after ($\bar v_h$) the relaxation step, namely $v_h := u_h - w_h$ and $\bar v_h := u_h - \bar w_h$, into discrete eigenfunction series:

$v_h = \sum_{k,\ell=1}^{n-1} \alpha_{k,\ell}\,\varphi_h^{k,\ell}\,, \qquad \bar v_h = \sum_{k,\ell=1}^{n-1} \alpha_{k,\ell}\,\chi_h^{k,\ell}\,\varphi_h^{k,\ell}\,.$  (17.16)
As motivated in Section 17.1.2, in order to analyze the smoothing properties of $S_h(\omega)$ we distinguish between low and high frequencies (with respect to the coarser grid $\Omega_{2h}$ used). In order to measure the smoothing properties of $S_h(\omega)$ quantitatively, we introduce the smoothing factor of $S_h(\omega)$ as follows:

Definition: Smoothing factor (of $\omega$-JAC for Model Problem 1) The smoothing factor $\mu(h;\omega)$ of $S_h(\omega)$, representing the worst factor by which high frequency error components are reduced per relaxation step, and its supremum $\mu^*$ over $h$, are defined as

$\mu(h;\omega) := \max\{|\chi_h^{k,\ell}(\omega)| : n/2 \le \max(k,\ell) \le n-1\}\,,$  (17.17)
$\mu^*(\omega) := \sup_{h \in \mathcal{H}} \mu(h;\omega)\,.$
From here on, $\mathcal{H}$ denotes the set of admissible (or reasonable) mesh sizes. Below we will see, for example, that for $\Omega = (0,1)^2$ the coarsest grid on which smoothing is applied is characterized by $h = 1/4$. In this case we would then define $\mathcal{H} = \{h = 1/n : h \le 1/4\}$. Inserting (17.14), we get from (17.17)

$\mu(h;\omega) = \max\big\{\big|1 - \tfrac{\omega}{2}(2 - \cos k\pi h - \cos \ell\pi h)\big| : n/2 \le \max(k,\ell) \le n-1\big\}\,,$  (17.18)
$\mu^*(\omega) = \max\{|1 - \omega/2|,\ |1 - 2\omega|\}\,.$

This shows that Jacobi's relaxation has no smoothing properties for $\omega \le 0$ or $\omega > 1$:

$\mu(h;\omega) \ge 1$ if $\omega \le 0$ or $\omega > 1$ (and $h$ sufficiently small).

For $0 < \omega < 1$, however, the smoothing factor is smaller than 1 and bounded away from 1, independently of $h$. For $\omega = 1$, we have a smoothing factor of $1 - O(h^2)$ only. In particular, we find from (17.18):

$\mu(h;\omega) = \begin{cases} \cos \pi h & \text{if } \omega = 1 \\ (2 + \cos \pi h)/4 & \text{if } \omega = 1/2 \\ (1 + 2\cos \pi h)/5 & \text{if } \omega = 4/5 \end{cases} \qquad \mu^*(\omega) = \begin{cases} 1 & \text{if } \omega = 1 \\ 3/4 & \text{if } \omega = 1/2 \\ 3/5 & \text{if } \omega = 4/5\,. \end{cases}$  (17.19)
The choice $\omega = 4/5$ is optimal in the following sense:

$\inf\{\mu^*(\omega) : 0 \le \omega \le 1\} = \mu^*(4/5) = 3/5\,.$

With respect to $\mu(h;\omega)$, one obtains

$\inf\{\mu(h;\omega) : 0 \le \omega \le 1\} = \mu\Big(h;\ \frac{4}{4 + \cos \pi h}\Big) = \frac{3\cos \pi h}{4 + \cos \pi h} = \frac{3}{5} - O(h^2)\,.$
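This smoothing analysis can be reproduced numerically by scanning the eigenvalue factors $\chi_h^{k,\ell}(\omega)$ over the high-frequency range (a sketch; the name `mu_jac` is ours):

```python
import numpy as np

def mu_jac(n, omega):
    """Smoothing factor mu(h; omega) of omega-JAC, (17.17)/(17.18), for
    h = 1/n: the worst |chi_h^{k,l}(omega)| over the high frequencies
    n/2 <= max(k, l) <= n-1."""
    h = 1.0 / n
    k = np.arange(1, n)
    K, L = np.meshgrid(k, k, indexing='ij')
    chi = 1.0 - 0.5 * omega * (2.0 - np.cos(K * np.pi * h)
                               - np.cos(L * np.pi * h))
    high = np.maximum(K, L) >= n // 2
    return np.abs(chi[high]).max()

n = 64
h = 1.0 / n
# reproduces the closed forms in (17.19):
print(np.isclose(mu_jac(n, 1.0), np.cos(np.pi * h)))               # True
print(np.isclose(mu_jac(n, 0.5), (2 + np.cos(np.pi * h)) / 4))     # True
print(np.isclose(mu_jac(n, 0.8), (1 + 2 * np.cos(np.pi * h)) / 5)) # True
```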
This means that one step of $\omega$-JAC with $\omega = 4/5$ reduces all high frequency error components at least by a factor of $3/5$ (independent of the grid size $h$). The above consideration is a first example of what we call smoothing analysis.

Gauss-Seidel-Type Iteration (Relaxation)

Gauss-Seidel-type methods represent a particularly important class of smoothers. The smoothing properties of Gauss-Seidel iterations depend, in general, on relaxation parameters and on the ordering of grid points (numbering of unknowns). With respect to the ordering of grid points, this behavior is substantially different from that of Jacobi-type methods, where the ordering is totally irrelevant. In Section 17.1.2 we introduced Gauss-Seidel iteration with a lexicographic ordering of the grid points. A different ordering is the so-called red-black ordering. If we use this red-black ordering for Gauss-Seidel iteration, we obtain the GS-RB method.

Remark 17.2.1 The red-black ordering of grid points is also called odd-even ordering. This notation has the advantage that the two types of grid points are more clearly addressed (a grid point $(x_i,y_j)$ is odd/even if $i+j$ is odd/even) than with the term red-black. Since red-black is more often used in the literature, we nevertheless stay with red-black.

The significance of using a relaxation parameter in Gauss-Seidel iteration is well known in the classical Gauss-Seidel convergence theory. For Model Problem 1, lexicographic Gauss-Seidel with a relaxation parameter $\omega$ is described by
$z_h^{m+1}(x_i,y_j) = \frac{1}{4}\,\big[\,h^2 f_h(x_i,y_j) + u_h^{m+1}(x_i-h,y_j) + u_h^m(x_i+h,y_j) + u_h^{m+1}(x_i,y_j-h) + u_h^m(x_i,y_j+h)\,\big]$

$u_h^{m+1} = u_h^m + \omega\,(z_h^{m+1} - u_h^m)\,.$  (17.20)
The parameter $\omega$ does not only enter explicitly in (17.20), but also implicitly in the "new values" $u_h^{m+1}(x_i-h,y_j)$ and $u_h^{m+1}(x_i,y_j-h)$. We will call this algorithm $\omega$-GS-LEX in the following. The corresponding algorithm with red-black ordering of the grid points and relaxation parameter $\omega$ is called $\omega$-GS-RB in this book. For Model Problem 1, the convergence of Gauss-Seidel iteration can be substantially improved by an overrelaxation parameter $\omega$: with

$\omega = \omega_0 = \frac{2}{1 + \sqrt{1 - \rho(\mathrm{JAC})^2}} = \frac{2}{1 + \sin \pi h}$

we obtain

$\rho(\omega_0\text{-GS}) = \omega_0 - 1 = \frac{1 - \sin \pi h}{1 + \sin \pi h} = 1 - O(h)$

instead of $\rho(\mathrm{GS}) = 1 - O(h^2)$ (for $\omega = 1$). This is the classical result on successive overrelaxation (SOR). Gauss-Seidel-type relaxations are often used for error smoothing. Therefore, in the multigrid context the smoothing properties of Gauss-Seidel are much more important than the convergence properties. We will, however, not yet analyze the smoothing properties of Gauss-Seidel-type relaxations here. Since, unlike in the Jacobi case, the eigenfunctions (17.13) of $L_h$ are not eigenfunctions of the Gauss-Seidel operator, we need different tools for this analysis. Here, we summarize the results of the smoothing analysis for Gauss-Seidel-type relaxations. For Model Problem 1, we obtain the smoothing factors

$\mu(\mathrm{GS\text{-}LEX}) = 0.50$ (for $\omega = 1$),
$\mu(\mathrm{GS\text{-}RB}) = 0.25$ (for $\omega = 1$).

(The factor of 0.25 for GS-RB is valid if only one or two smoothing steps are performed.) This result shows that the ordering of the grid points has an essential influence on the smoothing properties in the case of Gauss-Seidel relaxations. Furthermore, for Model Problem 1, the introduction of a relaxation parameter does not essentially improve the smoothing properties of GS-LEX relaxation. The situation is different for GS-RB, for which an overrelaxation parameter can improve both the smoothing properties and the convergence.
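A sketch of one GS-RB sweep (our naming); each half-sweep could run in parallel, since the 5-point stencil couples a point only to neighbours of the opposite colour:

```python
import numpy as np

def gauss_seidel_rb(u, f, h):
    """One red-black Gauss-Seidel sweep (omega = 1) for -Delta_h u_h = f_h.
    u: (n+1)x(n+1) array with fixed boundary values; updated in place.
    A point (i, j) is 'red' if i + j is even, 'black' if i + j is odd."""
    n = u.shape[0] - 1
    for colour in (0, 1):                 # red half-sweep, then black
        for i in range(1, n):
            for j in range(1, n):
                if (i + j) % 2 == colour:
                    u[i, j] = 0.25 * (h*h*f[i, j] + u[i-1, j] + u[i+1, j]
                                      + u[i, j-1] + u[i, j+1])
    return u
```

In a parallel implementation, the two inner loops of each half-sweep would be replaced by a single simultaneous update of all points of one colour.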
Parallel Properties of Smoothers
Here, we compare the smoothing properties and the parallel features of $\omega$-JAC, GS-LEX and GS-RB. First of all, $\omega$-JAC is "fully $\Omega_h$-parallel". By this we mean that the Jacobi operator (17.12) can be applied to all grid points of $\Omega_h$ simultaneously; the new values do not depend on each other. We also say that the degree of parallelism "par-deg" is

$\text{par-deg}(\omega\text{-JAC}) = \#\Omega_h\,.$

If we use GS-LEX instead, we have dependences since we want to use the most recent values of $u_h$ wherever possible. Grid points lying on a diagonal in $\Omega_h$ (see Figure 17.4) are independent of each other in the case of 5-point discretizations and can be treated in parallel. The degree of parallelism here
Figure 17.4: Grid points lying on the same diagonal can be treated in parallel in GS-LEX; going from one diagonal to the next is sequential.

Figure 17.5: Red-black distribution of grid points in $\Omega_h$.
clearly varies from one diagonal to the next and is restricted by

$\text{par-deg}(\mathrm{GS\text{-}LEX}) \le (\#\Omega_h)^{1/2}\,.$

In the case of GS-RB, each step of Gauss-Seidel relaxation consists of two half steps: in the first half step, all red grid points are treated simultaneously and independently (see Figure 17.5). In the second half step, all black grid
points are treated, using the updated values at the red points. The degree of parallelism is

$\text{par-deg}(\mathrm{GS\text{-}RB}) = \frac{1}{2}\,\#\Omega_h\,.$

Table 17.1 summarizes the properties of these relaxation schemes for Model Problem 1: $\omega$-JAC is fully parallel, but unfortunately not a really good smoother (not even with an optimized parameter $\omega$), whereas GS-LEX has reasonable smoothing properties but is not satisfactorily parallel. GS-RB, however, is both a very good smoother (much better than $\omega$-JAC) and highly parallel.

Relaxation        Smoothing factor   Smoothing        Parallel degree
JAC, ω = 1        1.0                no               N (full)
JAC, ω = 0.5      0.75               unsatisfactory   N (full)
JAC, ω = 0.8      0.6                acceptable       N (full)
GS-LEX            0.5 (*)            good             N^(1/2) (square root)
GS-RB             0.25 (*)(**)       very good        N/2 (half)

Table 17.1: Smoothing factors for various relaxation schemes; the smoothing factors marked with (*) will be obtained from the analysis given below; the factor marked with (**) is only valid if at most one or two smoothing steps are performed. (Here, $N$ denotes the number of grid points $\#\Omega_h$ corresponding to the unknown grid values of $u_h$.)
17.2.2 Introducing the Two-Grid Cycle
As already pointed out, basic multigrid consists of two ingredients: smoothing and coarse grid correction. We start with the two-grid cycle, the natural basis for any multigrid algorithm. For this purpose, we consider a discrete linear elliptic boundary value problem of the form

$L_h u_h = f_h \qquad (\Omega_h)$  (17.21)

and assume $L_h^{-1}$ to exist. As we are not going to give concrete quantitative results in this section, we do not make precise assumptions on the discrete operator $L_h$, the right-hand side $f_h$ or the grid $\Omega_h$. For a simple but characteristic example, one may always think of Model Problem 1 or some more general Poisson-like equation.
Iteration by Approximate Solution of the Defect Equation
One way of introducing the two-grid idea is to start from a general iteration based on an approximate solution of the defect equation. For any approximation $u_h^m$ of the solution $u_h$ of (17.21), we denote the error by

$v_h^m := u_h - u_h^m$  (17.22)

and the defect (or residual) by

$d_h^m := f_h - L_h u_h^m\,.$  (17.23)

Trivially, the defect equation

$L_h v_h^m = d_h^m$  (17.24)

is equivalent to the original equation (17.21) since

$u_h = u_h^m + v_h^m\,.$  (17.25)

This leads to the procedure

$u_h^m \;\to\; d_h^m = f_h - L_h u_h^m \;\to\; L_h v_h^m = d_h^m \;\to\; u_h = u_h^m + v_h^m\,.$  (17.26)
This procedure, however, is not a meaningful numerical process, but rather a "procedural formulation" corresponding to the original equation $L_h u_h = f_h$. However, if $L_h$ is approximated here by any "simpler" operator $\hat L_h$ such that $\hat L_h^{-1}$ exists, the solution $\hat v_h^m$ of

$\hat L_h \hat v_h^m = d_h^m$  (17.27)

gives a new approximation

$u_h^{m+1} := u_h^m + \hat v_h^m\,.$  (17.28)

The procedural formulation then looks like

$u_h^m \;\to\; d_h^m = f_h - L_h u_h^m \;\to\; \hat L_h \hat v_h^m = d_h^m \;\to\; u_h^{m+1} = u_h^m + \hat v_h^m\,.$  (17.29)

Starting with some $u_h^0$, the successive application of this process defines an iterative procedure. The iteration operator of this method is given by

$M_h = I_h - C_h L_h : \; G(\Omega_h) \to G(\Omega_h)\,,$  (17.30)
where $C_h := (\hat L_h)^{-1}$ and $I_h$ denotes the identity on $G(\Omega_h)$. We have

$u_h^{m+1} = M_h u_h^m + s_h$ with $s_h = (\hat L_h)^{-1} f_h \qquad (m = 0,1,2,\dots)\,.$  (17.31)

For the errors, it follows that

$v_h^{m+1} = M_h v_h^m = (I_h - C_h L_h)\,v_h^m \qquad (m = 0,1,2,\dots)$  (17.32)

and for the defects that

$d_h^{m+1} = L_h M_h L_h^{-1}\,d_h^m = (I_h - L_h C_h)\,d_h^m \qquad (m = 0,1,2,\dots)\,.$  (17.33)

If we start the iteration with $u_h^0 = 0$, then we can represent $u_h^m$ ($m = 1,2,\dots$) as

$u_h^m = \big(I_h + M_h + M_h^2 + \cdots + M_h^{m-1}\big)\,(\hat L_h)^{-1} f_h = (I_h - M_h^m)(I_h - M_h)^{-1}(\hat L_h)^{-1} f_h = (I_h - M_h^m)\,L_h^{-1} f_h\,.$  (17.34)

The asymptotic convergence properties of the above iterative process are characterized by the spectral radius (asymptotic convergence factor) of the iteration operator, i.e.

$\rho(M_h) = \rho(I_h - C_h L_h) = \rho(I_h - L_h C_h)\,.$  (17.35)

If some norm $\|\cdot\|$ on $G(\Omega_h)$ is introduced,

$\|I_h - C_h L_h\|\,, \qquad \|I_h - L_h C_h\|$  (17.36)

give the error reduction factor and the defect reduction factor, respectively, for one iteration.

Remark 17.2.2 Classical iterative linear solvers such as Jacobi or Gauss-Seidel iteration, if applied to (17.21), can be interpreted as (iterated) approximate solvers for the defect equation. For $\omega$-JAC we have, for example, $\hat L_h = \frac{1}{\omega} D_h$, where $D_h$ is the "diagonal" part of the matrix corresponding to $L_h$. Similarly, GS-LEX is obtained by setting $\hat L_h$ to be the "lower triangular" part of the matrix corresponding to $L_h$, including its diagonal part.

Remark 17.2.3 More generally, any of the classical iterative linear solvers of the form (17.31) can be interpreted as an iterated approximate solver for the defect equation if $C_h := (I_h - M_h) L_h^{-1}$ is invertible. For then we can set $\hat L_h := C_h^{-1}$.
Coarse Grid Correction
One idea for approximately solving the defect equation is to use an appropriate approximation $L_H$ of $L_h$ on a coarser grid $\Omega_H$, for instance the grid with mesh size $H = 2h$. This means that the defect equation (17.24) is replaced by

$L_H \hat v_H^m = d_H^m\,.$  (17.37)

Here we assume

$L_H : G(\Omega_H) \to G(\Omega_H)\,, \qquad \dim G(\Omega_H) < \dim G(\Omega_h)$  (17.38)

and $L_H^{-1}$ to exist. As $d_H^m$ and $\hat v_H^m$ are grid functions on the coarser grid $\Omega_H$, we assume two (linear) transfer operators

$I_h^H : G(\Omega_h) \to G(\Omega_H)\,, \qquad I_H^h : G(\Omega_H) \to G(\Omega_h)$  (17.39)

to be given. $I_h^H$ is used to restrict $d_h^m$ to $\Omega_H$:

$d_H^m := I_h^H d_h^m\,,$  (17.40)

and $I_H^h$ is used to interpolate (or prolongate) the correction $\hat v_H^m$ to $\Omega_h$:

$\hat v_h^m := I_H^h \hat v_H^m\,.$  (17.41)

The simplest example of a restriction operator is the "injection" operator

$d_H(P) = I_h^H d_h(P) := d_h(P) \qquad \text{for } P \in \Omega_H \subset \Omega_h\,,$  (17.42)
which identifies approximations at coarse grid points with the corresponding approximations at fine grid points. A fine and a coarse grid with the injection operator are presented in Figure 17.6. Altogether, one coarse grid correction step (calculating $u_h^{m+1}$ from $u_h^m$) proceeds as follows:

Coarse grid correction $u_h^m \to u_h^{m+1}$:
· Compute the defect: $d_h^m = f_h - L_h u_h^m$
· Restrict the defect (fine-to-coarse transfer): $d_H^m = I_h^H d_h^m$
· Solve exactly on $\Omega_H$: $L_H \hat v_H^m = d_H^m$
· Interpolate the correction (coarse-to-fine transfer): $\hat v_h^m = I_H^h \hat v_H^m$
· Compute a new approximation: $u_h^{m+1} = u_h^m + \hat v_h^m$
Figure 17.6: A fine and a coarse grid with the injection operator.

The associated iteration operator is given by

$I_h - C_h L_h \qquad \text{with} \qquad C_h = I_H^h L_H^{-1} I_h^H\,.$  (17.43)

However:

Remark 17.2.4 Taken on its own, the coarse grid correction process is of no use: it is not convergent! We have

$\rho\big(I_h - I_H^h L_H^{-1} I_h^H L_h\big) \ge 1\,.$  (17.44)
This follows directly from the fact that $I_h^H$ maps $G(\Omega_h)$ into the lower-dimensional space $G(\Omega_H)$, and therefore $C_h = I_H^h L_H^{-1} I_h^H$ is not invertible. This implies that

$C_h L_h v_h = 0 \qquad \text{for certain} \qquad v_h \ne 0\,.$

Example 17.2 It may be illustrative to see what $C_h L_h v_h = 0$ means in practice. For the simple injection operator (17.42), for example, any error function $v_h \in G(\Omega_h)$ with

$L_h v_h(P) = \begin{cases} 0 & \text{for } P \in \Omega_H \\ \text{arbitrary} & \text{for } P \in \Omega_h \setminus \Omega_H \end{cases}$
is annihilated by $I_h^H$ and therefore by $C_h$. Such an error function $v_h$ will thus not be changed by a pure coarse grid correction. As a more concrete example, consider Model Problem 1 ($L_h = -\Delta_h$), $h = 1/n$, and

$v_h(x,y) = \sin \frac{n\pi x}{2}\,\sin \frac{n\pi y}{2}\,.$  (17.45)

For standard coarsening, we find

$v_h(P) = L_h v_h(P) = I_h^{2h} L_h v_h(P) = 0 \qquad \text{for all } P \in \Omega_{2h}\,.$

(This relation holds for any restriction operator $I_h^{2h}$ as long as the stencil of $I_h^{2h}$ is symmetric in $x$ and $y$.) Clearly, $v_h$ belongs to the high frequency part of the eigenfunctions of $L_h$.
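Example 17.2 can be checked directly: for $v_h$ from (17.45), both $v_h$ and $L_h v_h$ vanish at every coarse-grid point, although $v_h$ is anything but small on the fine grid (a small numpy check):

```python
import numpy as np

n = 8
h = 1.0 / n
x = np.arange(n + 1) * h
X, Y = np.meshgrid(x, x, indexing='ij')
v = np.sin(n * np.pi * X / 2) * np.sin(n * np.pi * Y / 2)   # (17.45)

Lv = np.zeros_like(v)                                        # L_h v = -Delta_h v
Lv[1:-1, 1:-1] = (4*v[1:-1, 1:-1] - v[:-2, 1:-1] - v[2:, 1:-1]
                  - v[1:-1, :-2] - v[1:-1, 2:]) / h**2

# interior coarse-grid points have even indices i, j = 2, 4, ..., n-2
print(np.allclose(v[2:-1:2, 2:-1:2], 0.0))    # True: v_h vanishes there
print(np.allclose(Lv[2:-1:2, 2:-1:2], 0.0))   # True: so does L_h v_h
```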
Structure of the Two-Grid Operator

The above considerations imply that it is necessary to combine the two processes of smoothing and of coarse grid correction. In general, the interpolation of coarse grid corrections reintroduces high frequency error components on the fine grid. One natural approach to reduce them is to introduce one or a few additional smoothing sweeps after the coarse grid correction. These sweeps are commonly denoted as postsmoothing. Each iteration step (cycle) of a two-grid method thus consists of a presmoothing, a coarse grid correction and a postsmoothing part. One step of such an iterative two-grid $(h,H)$-method (calculating $u_h^{m+1}$ from $u_h^m$) proceeds as follows:

Two-grid cycle $u_h^{m+1} = \mathrm{TGCYC}(u_h^m, L_h, f_h, \nu_1, \nu_2)$

(1) Presmoothing: Compute $\bar u_h^m$ by applying $\nu_1$ ($\ge 0$) steps of a given smoothing procedure to $u_h^m$:

$\bar u_h^m = \mathrm{SMOOTH}^{\nu_1}(u_h^m, L_h, f_h)\,.$  (17.46)

(2) Coarse grid correction (CGC):
· Compute the defect: $\bar d_h^m = f_h - L_h \bar u_h^m$.
· Restrict the defect (fine-to-coarse transfer): $\bar d_H^m = I_h^H \bar d_h^m$.
· Solve on $\Omega_H$:
$L_H \hat v_H^m = \bar d_H^m\,.$  (17.47)
· Interpolate the correction (coarse-to-fine transfer): $\hat v_h^m = I_H^h \hat v_H^m$.
· Compute the corrected approximation: $u_h^{m,\text{after CGC}} = \bar u_h^m + \hat v_h^m$.

(3) Postsmoothing: Compute $u_h^{m+1}$ by applying $\nu_2$ ($\ge 0$) steps of the given smoothing procedure to $u_h^{m,\text{after CGC}}$:

$u_h^{m+1} = \mathrm{SMOOTH}^{\nu_2}(u_h^{m,\text{after CGC}}, L_h, f_h)\,.$  (17.48)
For the formal description of smoothing procedures we have used the notation

$\bar u_h^m = \mathrm{SMOOTH}^{\nu}(u_h^m, L_h, f_h)\,.$

With this notation we try to combine the advantages of a more mathematically oriented, operator-like notation and a more computer-science oriented, formal procedural notation. In particular, the number of smoothing steps appears as an upper (power) index. Similar notation is also used for the two-grid and for the multigrid procedure. The two-grid cycle is illustrated in Figure 17.7.

Figure 17.7: Structure of an $(h,H)$ two-grid method: presmoothing ($\mathrm{SMOOTH}^{\nu_1}$) on $\Omega_h$, computation and restriction ($I_h^H$) of the defect $\bar d_h^m$, solution of $L_H \hat v_H^m = \bar d_H^m$ on $\Omega_H$, interpolation ($I_H^h$) and addition of the correction, and postsmoothing ($\mathrm{SMOOTH}^{\nu_2}$).

From the above description, one immediately obtains the iteration operator $M_h^H$ of the $(h,H)$ two-grid cycle:
H H M h = Sh 2 K h S h 1
with
H h H Kh := Ih  IH LH 1 Ih Lh .
(17.49)
From Figure 17.7, we see that the following individual components of the (h, H)cycle have to be specified:
· the smoothing procedure SMOOTH(u_h^m, L_h, f_h),
· the numbers ν1, ν2 of smoothing steps,
· the coarse grid Ω_H,
· the fine-to-coarse restriction operator I_h^H,
· the coarse grid operator L_H,
· the coarse-to-fine interpolation operator I_H^h.
Experience with multigrid methods (and multigrid theory) shows that the choice of these components may on the one hand have a strong influence on the efficiency of the resulting algorithms. On the other hand, there seem to be no general simple rules on how to choose the individual components in order to construct optimal algorithms for complicated applications. One can, however, recommend certain choices for certain situations. The main objective of the elementary multigrid theory and the local Fourier analysis is to analyze the convergence properties of multigrid theoretically.
17.2.3
Multigrid Components
In this section, we will introduce and list some important examples of how some of the multigrid components can be specified, i.e.
· the coarse grid specification Ω_H,
· the transfer operator: restriction I_h^H,
· the transfer operator: interpolation I_H^h, and
· the coarse grid operator L_H.
All examples refer to the 2D case. The idea of giving these specifications is to make our presentation more concrete and to introduce the corresponding notation. The multigrid components specified here are needed in Subsection 17.2.5, where a specific multigrid Poisson solver is introduced. Therefore, readers who are more interested in the general structure of multigrid than in specific details may skip this section for the time being.

Choices of Coarse Grids

In this subsection, we will mention some possible and common choices for the grid Ω_H. The simplest and most frequently used choice is standard coarsening, doubling the mesh size h in every direction. Most of the results and considerations in this book refer to this choice. In d dimensions, the relation between the numbers of grid points (neglecting boundary effects) is

    #Ω_H ≈ (1/2^d) #Ω_h .
We speak of semicoarsening in 2D if the mesh size h is doubled in one direction only, i.e. H = (2h_x, h_y) (x-semicoarsening, see Figure 17.8) or H = (h_x, 2h_y) (y-semicoarsening). This is especially of interest for anisotropic operators. Note that in this case

    #Ω_H ≈ (1/2) #Ω_h .    (17.50)
In 3D, we have additional types of semicoarsening: we can double the mesh size in one or in two directions. We speak of red-black coarsening if the coarse grid points are distributed in the fine grid in a red-black (checkerboard) manner (see Figure 17.8). Also other coarsenings, like 4h-coarsening, are sometimes of interest.
Abbildung 17.8: Examples of standard, x-semi-, red-black and 4h-coarsening in a square computational domain Ω. The grid points of Ω_H are marked by dots.

We finally mention that in the context of the Algebraic Multigrid approach (AMG), the coarse grid Ω_H is not formed according to such a fixed simple strategy. Using the algebraic relations in the corresponding matrix, Ω_H is determined by the AMG process itself in the course of calculation. We will see that the red-black coarsened grid as Ω_H is a standard choice for Model Problem 1 in AMG.

Choice of the Coarse Grid Operator

Up to now, we have not yet described precisely how the coarse grid operator L_H can be chosen. A natural choice is to use the direct analog of L_h on the grid Ω_H. For Model Problem 1, this means

                1   [  0 −1  0 ]
    L_H u_H =  ---  [ −1  4 −1 ]    u_H .
               H^2  [  0 −1  0 ]_H
The direct coarse grid analog of the fine grid operator L_h will be used in most parts of this book, in particular in all chapters on basic multigrid. There are, however, applications and multigrid algorithms which make use of a different approach. The so-called Galerkin coarse grid operator is defined by

    L_H := I_h^H L_h I_H^h ,    (17.51)
where I_h^H and I_H^h are appropriate transfer operators. We will return to the Galerkin approach in the context of problems with discontinuous coefficients and in the context of algebraic systems of equations without a grid-oriented background. In the latter situation, the natural approach is not available.
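The Galerkin definition (17.51) can be made concrete in 1D, where full weighting and linear interpolation are small matrices. The following check, our own construction, confirms the classical fact that in this 1D setting the Galerkin operator coincides with the direct 2h analogue of the model operator:

```python
import numpy as np

def A1d(n):
    """1D model operator -u'' with mesh h = 1/n, Dirichlet; n-1 interior points."""
    h = 1.0 / n
    return (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
            - np.diag(np.ones(n - 2), -1)) / h**2

def P1d(n):
    """Linear interpolation matrix from the 2h-grid (n//2 - 1 points) to the h-grid."""
    nc = n // 2
    P = np.zeros((n - 1, nc - 1))
    for J in range(1, nc):
        P[2*J - 1, J - 1] = 1.0    # fine point coinciding with coarse point J
        P[2*J - 2, J - 1] = 0.5    # left midpoint
        P[2*J, J - 1] = 0.5        # right midpoint
    return P

n = 16
P = P1d(n)                    # interpolation I_{2h}^h
R = P.T / 2                   # full weighting [1 2 1]/4 equals (1/2) P^T in 1D
L_galerkin = R @ A1d(n) @ P   # L_H := I_h^H L_h I_H^h, cf. (17.51)
```

In 2D the analogous construction applied to the 5-point stencil with full weighting and bilinear interpolation produces a 9-point Galerkin stencil, so the exact coincidence above is special to 1D.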
Transfer Operators: Restriction
The choice of restriction and interpolation operators I_h^H and I_H^h, for the intergrid transfer of grid functions, is closely related to the choice of the coarse grid. Here, we introduce transfer operators in the case of standard coarsening (see Figure 17.8), i.e. the grid transfers between the grid Ω_h and the 2h-grid Ω_2h. A restriction operator I_h^{2h} maps h-grid functions to 2h-grid functions. One restriction operator already discussed is the injection operator (17.42). Another frequently used restriction operator is the Full Weighting operator (FW), which in stencil notation reads
                 1   [ 1 2 1 ]
    I_h^{2h} =  ---  [ 2 4 2 ]          .    (17.52)
                16   [ 1 2 1 ]_h^{2h}
Applying this operator to a grid function d_h(x, y) at a coarse grid point (x, y) ∈ Ω_2h means

    d_2h(x, y) = I_h^{2h} d_h(x, y)
               = (1/16) [ 4 d_h(x, y)
                          + 2 d_h(x + h, y) + 2 d_h(x − h, y)
                          + 2 d_h(x, y + h) + 2 d_h(x, y − h)
                          + d_h(x + h, y + h) + d_h(x + h, y − h)
                          + d_h(x − h, y + h) + d_h(x − h, y − h) ] .    (17.53)

Obviously, a 9-point weighted average of d_h is obtained.
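Formula (17.53) vectorizes naturally over all coarse grid points. The sketch below is our own code; it applies FW to a fine grid function stored with its boundary values on an (n+1) × (n+1) array, n even:

```python
import numpy as np

def full_weighting(d):
    """Apply the FW operator (17.53) to fine grid values d (shape (n+1, n+1),
    boundary included, n even); returns the values at the interior coarse points."""
    c  = d[2:-1:2, 2:-1:2]                          # coarse point itself
    ew = d[2:-1:2, 1:-2:2] + d[2:-1:2, 3::2]        # east/west neighbours
    ns = d[1:-2:2, 2:-1:2] + d[3::2, 2:-1:2]        # north/south neighbours
    dg = (d[1:-2:2, 1:-2:2] + d[1:-2:2, 3::2]       # the four diagonal neighbours
          + d[3::2, 1:-2:2] + d[3::2, 3::2])
    return (4 * c + 2 * (ew + ns) + dg) / 16
```

Since the weights sum to 1, constants are reproduced; in fact any grid function linear in x and y is restricted to its exact coarse grid values, which makes a convenient sanity check.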
Remark 17.2.5 The Full Weighting operator can be derived from the condition

    ∫∫_{Ω_{x,y}} d_h(x, y) dx dy = ∫∫_{Ω_{x,y}} (I_h^{2h} d_h)(x, y) dx dy    (17.54)

for Ω_{x,y} = [x − h, x + h] × [y − h, y + h], where the midpoint rule is used to approximate the integral on the right-hand side of the equation and the trapezoidal rule is used to approximate the integral on the left-hand side.
Remark 17.2.6 The Full Weighting operator in d dimensions can also be constructed as a tensor product of the 1D Full Weighting operators

                 1                                    1  [ 1 ]
    I_h^{2h} =  --- [ 1 2 1 ]_h^{2h}   and   I_h^{2h} = --- [ 2 ]          .
                 4                                    4  [ 1 ]_h^{2h}

These 1D restriction operators are of particular interest in combination with the semicoarsening approach discussed in the previous section. They are also commonly used in case of non-eliminated boundary conditions.

A third restriction operator is Half Weighting:

                 1   [ 0 1 0 ]
    I_h^{2h} =  ---  [ 1 4 1 ]          .    (17.55)
                 8   [ 0 1 0 ]_h^{2h}
Transfer Operators: Interpolation

The interpolation (prolongation) operators map 2h-grid functions into h-grid functions. A very frequently used interpolation method is bilinear interpolation from G_2h to G_h, which is given by:

                       | v_2h(x, y)                                    for fine points coinciding with coarse points,
                       | (1/2) [v_2h(x, y + h) + v_2h(x, y − h)]       for fine points between two coarse points in the y-direction,
    I_2h^h v_2h(x,y) = | (1/2) [v_2h(x + h, y) + v_2h(x − h, y)]       for fine points between two coarse points in the x-direction,
                       | (1/4) [v_2h(x + h, y + h) + v_2h(x + h, y − h)
                       |        + v_2h(x − h, y + h) + v_2h(x − h, y − h)]   for fine points in the center of a coarse grid cell.
                                                                             (17.56)

Figure 17.9 presents (part of) a fine grid with the symbols for the fine and coarse grid points referred to by formula (17.56).
Abbildung 17.9: A fine grid with symbols indicating the bilinear interpolation (17.56) used for the transfer from the coarse grid (·).

Remark 17.2.7 Linear interpolation in d dimensions can easily be constructed by an iterative procedure (over the d dimensions) of 1D linear interpolations. Also d-dimensional higher order interpolations can be constructed and programmed very efficiently in this way.
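For standard coarsening the four cases of (17.56) map directly onto array slices. A compact sketch (our own code; v carries the coarse grid values including the boundary):

```python
import numpy as np

def bilinear_prolong(v, n):
    """Bilinear interpolation (17.56) from the 2h-grid (shape (n//2+1, n//2+1),
    boundary included) to the h-grid (shape (n+1, n+1)), n even. For corrections
    with eliminated Dirichlet conditions the boundary entries of v are zero."""
    w = np.zeros((n + 1, n + 1))
    w[::2, ::2] = v                                      # coinciding points
    w[1::2, ::2] = 0.5 * (v[:-1, :] + v[1:, :])          # midpoints in one direction
    w[::2, 1::2] = 0.5 * (v[:, :-1] + v[:, 1:])          # midpoints in the other
    w[1::2, 1::2] = 0.25 * (v[:-1, :-1] + v[:-1, 1:]
                            + v[1:, :-1] + v[1:, 1:])    # coarse cell centers
    return w
```

Bilinear interpolation reproduces functions that are linear in x and y exactly, which is a handy correctness check.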
In stencil notation we write the bilinear interpolation operator I_2h^h (17.56) as

                1   ] 1 2 1 [
    I_2h^h  =  ---  ] 2 4 2 [          .    (17.57)
                4   ] 1 2 1 [_2h^h
In this notation, the stencil entries correspond to weights in a distribution process as illustrated in Figure 17.10. Therefore, the brackets are reversed.
Abbildung 17.10: The distribution process for the bilinear interpolation operator; ◦ : G_h \ G_2h, · : G_2h grid.
For more general interpolation operators the stencils read

                                        ]  ·     ·        ·        ·      · [
                                        ]  ·  t_{−1,1}  t_{0,1}  t_{1,1}  · [
    I_2h^h = ] t_{κ1,κ2} [_2h^h  :=     ]  ·  t_{−1,0}  t_{0,0}  t_{1,0}  · [    .    (17.58)
                                        ]  ·  t_{−1,−1} t_{0,−1} t_{1,−1} · [
                                        ]  ·     ·        ·        ·      · [
Again, only a finite number of the coefficients t_{κ1,κ2} is assumed to be nonzero. The meaning of this stencil terminology is that coarse grid values are distributed to the fine grid, and the weights in this distribution process are the factors t_{κ1,κ2}.

Remark 17.2.8 The formulae for linear interpolation can be applied near Dirichlet boundaries even if the boundary conditions have been eliminated. The corrections from a boundary point are assumed to be 0 in this case.
17.2.4
The Multigrid Cycle
In Section 17.2.2 we have described the multigrid principle only in its two-grid version. In the multigrid context, two-grid methods are of little practical significance (due to the still large complexity of the coarse grid problem). However, they serve as the basis for the multigrid method.

Remark 17.2.9 Methods involving only two grids (or a fixed small number of grids) are of some interest in other frameworks, e.g. in the context of certain domain decomposition and related methods.

From two-grid to multigrid: The multigrid idea starts from the observation that in a well convergent two-grid method it is neither useful nor necessary to solve the coarse grid defect equation (17.47) exactly. Instead, without essential loss of convergence speed, one may replace v_H^m by a suitable approximation. A natural way to obtain such an approximation is to apply the two-grid idea to equation (17.47) again, now employing an even coarser grid than Ω_H.
This is possible, as obviously the coarse grid equation (17.47) is of the same form as the original equation (17.21). If the convergence factor of the two-grid method is small enough, it is sufficient to perform only a few, say γ (see Figure 17.11), two-grid iteration steps to obtain a good enough approximation to the solution of (17.47). This idea can, in a straightforward manner, be applied recursively, using coarser and coarser grids, down to some coarsest grid. On this coarsest grid any solution method may be used (e.g. a direct method or some relaxation-type method if it has sufficiently good convergence properties on that coarsest grid). In ideal cases, the coarsest grid consists of just one grid point. On a very coarse grid, classical iterative solvers are usually satisfactory solvers; their bad convergence properties just appear for h → 0.

Sequences of Grids and Operators

Figure 17.11 illustrates the structure of one iteration step (cycle) of a multigrid method with a few pictures. Usually, the cases γ = 1 and γ = 2 are particularly interesting. For obvious reasons, we refer to the case γ = 1 as V-cycles and to γ = 2 as W-cycles. The number γ is also called cycle index.
Recursive Definition

For a formal description of multigrid methods we now use a sequence of coarser and coarser grids Ω_{h_k}, characterized by a sequence of mesh sizes h_k: h_ℓ, h_{ℓ−1}, ..., h_0. The coarsest grid is characterized by the mesh size h_0 (index 0), whereas the index ℓ corresponds to the given finest grid Ω_h: h = h_ℓ. For simplicity, we replace the index h_k by k (for grids, grid functions and grid operators) in the following. For each Ω_k, we assume that linear operators

    L_k : G(Ω_k) → G(Ω_k),            S_k : G(Ω_k) → G(Ω_k),
    I_k^{k−1} : G(Ω_k) → G(Ω_{k−1}),  I_{k−1}^k : G(Ω_{k−1}) → G(Ω_k)    (17.1)

are given, where the equations

    L_k u_k = f_k    (Ω_k)    (17.2)
Abbildung 17.11: Structure of one multigrid cycle for two-, three-, four- and five-grid methods and different values of the cycle index γ (· stands for smoothing, ◦ for exact solution, \ for fine-to-coarse and / for coarse-to-fine transfer).

are discretizations of Lu = f on Ω_k for k = ℓ, ℓ−1, ..., 0. The operators S_k denote the linear iteration operators corresponding to given smoothing methods on Ω_k. As in the description of the two-grid cycle, performing ν smoothing steps (applied to the discrete problem L_k u_k = f_k with initial approximation w_k) resulting in the approximation w̄_k will also be denoted by

    w̄_k = SMOOTH^ν (w_k, L_k, f_k) .

We now describe a multigrid cycle (multigrid iteration, MGI), more precisely an (ℓ+1)-grid cycle, to solve the system of difference equations (17.2) for a fixed ℓ ≥ 1. Using the operators L_k (k = ℓ, ℓ−1, ..., 0) as well as S_k, I_k^{k−1}, I_{k−1}^k (k = ℓ, ℓ−1, ..., 1), assuming the parameters ν1, ν2 (the number of pre- and postsmoothing iterations) and γ to be fixed and starting on the finest grid k = ℓ, the calculation of a new iterate u_k^{m+1} from a given approximation u_k^m to the solution u_k proceeds as follows:

Multigrid cycle  u_k^{m+1} = MGCYC(k, γ, u_k^m, L_k, f_k, ν1, ν2)

(1) Presmoothing
    Compute ū_k^m by applying ν1 (≥ 0) smoothing steps to u_k^m:

        ū_k^m = SMOOTH^{ν1}(u_k^m, L_k, f_k) .

(2) Coarse grid correction
    Compute the defect        d̄_k^m = f_k − L_k ū_k^m .
    Restrict the defect       d̄_{k−1}^m = I_k^{k−1} d̄_k^m .
    Compute an approximate solution v̂_{k−1}^m of the defect equation on Ω_{k−1}

        L_{k−1} v̂_{k−1}^m = d̄_{k−1}^m    (17.3)

    by:
    If k = 1, use a direct or fast iterative solver for (17.3).
    If k > 1, solve (17.3) approximately by performing γ (≥ 1) k-grid cycles using the zero grid function as a first approximation:

        v̂_{k−1}^m = MGCYC^γ (k − 1, γ, 0, L_{k−1}, d̄_{k−1}^m, ν1, ν2) .    (17.4)

    Interpolate the correction    v̂_k^m = I_{k−1}^k v̂_{k−1}^m .
    Compute the corrected approximation on Ω_k

        u_k^{m, after CGC} = ū_k^m + v̂_k^m .

(3) Postsmoothing
    Compute u_k^{m+1} by applying ν2 (≥ 0) smoothing steps to u_k^{m, after CGC}:

        u_k^{m+1} = SMOOTH^{ν2}(u_k^{m, after CGC}, L_k, f_k) .
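The recursion in steps (1)-(3) translates almost line by line into code. Below is a 1D sketch of MGCYC written by us (damped Jacobi as smoother, full weighting and linear interpolation as transfers, coarsest mesh size 1/2); the names and parameter choices are our own, not the text's.

```python
import numpy as np

def A1d(n):
    """1D model operator -u'' with h = 1/n, Dirichlet boundary, n-1 unknowns."""
    h = 1.0 / n
    return (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
            - np.diag(np.ones(n - 2), -1)) / h**2

def smooth(u, f, A, nu, omega=2/3):
    D = np.diag(A)
    for _ in range(nu):
        u = u + omega * (f - A @ u) / D      # damped Jacobi sweeps
    return u

def restrict_fw(d):
    """1D full weighting [1 2 1]/4."""
    return np.array([(d[2*I - 2] + 2*d[2*I - 1] + d[2*I]) / 4
                     for I in range(1, len(d) // 2 + 1)])

def interpolate(v, n):
    """1D linear interpolation of a coarse correction to the fine grid."""
    vp = np.concatenate(([0.0], v, [0.0]))
    w = np.zeros(n - 1)
    w[1::2] = v
    w[0::2] = 0.5 * (vp[:-1] + vp[1:])
    return w

def mgcyc(k_n, gamma, u, f, nu1, nu2):
    """One multigrid cycle on the grid with mesh size 1/k_n, cf. steps (1)-(3)."""
    A = A1d(k_n)
    u = smooth(u, f, A, nu1)                       # (1) pre-smoothing
    d = restrict_fw(f - A @ u)                     # defect and its restriction
    if k_n // 2 == 2:                              # coarsest grid (mesh size 1/2):
        v = np.linalg.solve(A1d(2), d)             # solve (17.3) directly
    else:
        v = np.zeros(k_n // 2 - 1)
        for _ in range(gamma):                     # gamma cycles, cf. (17.4)
            v = mgcyc(k_n // 2, gamma, v, d, nu1, nu2)
    u = u + interpolate(v, k_n)                    # coarse grid correction
    return smooth(u, f, A, nu2)                    # (3) post-smoothing
```

Calling the routine with gamma = 1 gives a V-cycle, with gamma = 2 a W-cycle.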
In (17.4) the parameter γ appears twice: once (as an argument of MGCYC) for the indication of the cycle type to be employed on coarser levels and once (as a power) to specify the number of cycles to be carried out on the current coarse grid level.

Remark 17.2.10 Since on coarse grids we deal with corrections to the fine grid approximation, this multigrid scheme is also called the correction scheme (CS). As we will see below, it is also possible to work with full approximations of the fine grid solution on coarse grids. That multigrid scheme is called full approximation scheme (FAS).

Let M_k denote the iteration operator of the multigrid method described in the previous algorithm. The multigrid iteration operator M_k is given by the following recursion:
    M_0 = 0 ,
    M_k = S_k^{ν2} [ I_k − I_{k−1}^k (I_{k−1} − (M_{k−1})^γ)(L_{k−1})^{−1} I_k^{k−1} L_k ] S_k^{ν1}    (k = 1, ..., ℓ) .    (17.5)

An explicit proof of (17.5) can easily be given by means of induction on k. Implicitly, a proof is also contained in the following remark.

Remark 17.2.11 The difference between the (h_k, h_{k−1}) two-grid iteration operator

    M_k = S_k^{ν2} [ I_k − I_{k−1}^k (L_{k−1})^{−1} I_k^{k−1} L_k ] S_k^{ν1}    (17.6)

and the above multigrid iteration operator M_k is obviously that (L_{k−1})^{−1} is replaced by

    (I_{k−1} − (M_{k−1})^γ)(L_{k−1})^{−1} .    (17.7)

This reflects the fact that the coarse grid equation (17.3) is solved approximately by γ multigrid steps on the grid Ω_{k−1} starting with zero initial approximation (compare with (17.30)). Here we use the simple consideration: If any nonsingular system Au = f is approximately solved by ν steps of an iterative method, u^{m+1} = M u^m + s, with u^0 = 0, then the νth iterate can be written as

    u^ν = (I − M^ν) A^{−1} f .
Remark 17.2.12 (F-cycle) It is convenient, but not necessary, that the parameters ν1, ν2 and the cycle index γ are fixed numbers. In particular, γ may depend on k. Certain combinations of γ = 1 and γ = 2 are indeed used in practice. We mention here only the so-called F-cycle, which is illustrated in Figure 17.12. Its coarse grid correction consists of an F-cycle on the coarser grid followed by a V-cycle on the coarser grid. The corresponding iteration operator M_ℓ^F is recursively defined by M_1^F = M_1 (as in (17.5)) and
    M_k^F = S_k^{ν2} [ I_k − I_{k−1}^k (I_{k−1} − M_{k−1}^V M_{k−1}^F)(L_{k−1})^{−1} I_k^{k−1} L_k ] S_k^{ν1}    (k = 2, ..., ℓ) .

Here M_{k−1}^V is the corresponding V-cycle iteration operator (i.e. (17.5) with γ = 1 and k − 1 instead of k).
Abbildung 17.12: Structure of an F-cycle.

Remark 17.2.13 In self-adaptive algorithms, variable cycles are used: Switching from one grid to another (to a finer or a coarser one) is controlled by suitable accommodative criteria.

Computational Work

In Section 17.2.5, we will see that we can achieve a convergence factor for multigrid independent of the mesh size. The convergence does not depend
on the size of the finest grid. But the fact that a certain method has an h-independent convergence factor says nothing about its efficiency as long as the computational work is not taken into account. In the following, we will estimate the computational work of a multigrid method. From the recursive definition of a multigrid cycle as given in Section 17.2.4 it immediately follows that the computational work W_ℓ per multigrid cycle is recursively given by
    W_1 = W_1^0 + W_0 ,     W_{k+1} = W_{k+1}^k + γ W_k    (k = 1, ..., ℓ − 1) .    (17.8)

Here W_{k+1}^k denotes the computational work of one (h_{k+1}, h_k) two-grid cycle excluding the work needed to solve the defect equations on Ω_k, and W_0 denotes the work needed to compute the exact solution on the coarsest grid Ω_0. By "computational work", we always denote some reasonable measure, typically the number of arithmetic operations needed. If γ is independent of k, we obtain from (17.8)
    W_ℓ = Σ_{k=1}^{ℓ} γ^{ℓ−k} W_k^{k−1} + γ^{ℓ−1} W_0    (γ ≥ 1) .    (17.9)
Let us first discuss the case of standard coarsening in 2D with γ independent of k. Obviously, we have in this case

    N_k ≐ 4 N_{k−1}    (k = 1, 2, ..., ℓ)    (17.10)

where N_k = #Ω_k (the number of grid points on Ω_k) and "≐" means equality up to lower order terms (boundary effects). Furthermore, we assume that the multigrid components (relaxation, computation of defects, fine-to-coarse and coarse-to-fine transfer) require a number of arithmetic operations per point of the respective grids which is bounded by a constant C, independent of k:

    W_k^{k−1} ⪅ C N_k    (k = 1, 2, ..., ℓ) .    (17.11)

(As above, "⪅" means "≤" up to lower order terms.) This is a typical feature of multigrid methods. In particular, (17.11) is satisfied with "≐" instead of "⪅" if all multigrid components are constructed in the same way on all grids. Under these assumptions, one obtains the following estimate for the total
computational work W_ℓ of one complete multigrid cycle in 2D from (17.9):

            (4/3) C N_ℓ        (for γ = 1)
    W_ℓ ⪅   2 C N_ℓ            (for γ = 2)
            4 C N_ℓ            (for γ = 3)
            O(N_ℓ log N_ℓ)     (for γ = 4) .    (17.12)

For γ = 4 the total computational work on each level is essentially constant (up to lower order terms) and the number of grid levels is O(log N_ℓ).

Summary: This estimate of W_ℓ shows that the number of arithmetic operations needed for one 2D multigrid cycle is proportional to the number of grid points of the finest grid for γ ≤ 3 and standard coarsening (under quite natural assumptions which are satisfied for reasonable multigrid methods). Together with the h-independent convergence, this means that multigrid methods achieve a fixed reduction (independent of h) of the defect in O(N) operations. The constant of proportionality depends on the type of the cycle, i.e. on γ, the type of coarsening and the other multigrid components. For reasonable choices of these components, the constants of proportionality are small.
Remark 17.2.14 In practice, it is not necessary to choose a grid consisting of only one interior point as the coarsest grid. Instead, it is sufficient to choose the coarsest grid Ω_0 such that the amount of work (of the corresponding solution method) on Ω_0 is negligible. This assumption is also used in the derivation of the above results.
Remark 17.2.15 W_k^{k−1} in (17.8) is determined by the computational work needed for the individual multigrid components of the (h_k, h_{k−1}) two-grid method, namely

    W_k^{k−1} ≐ (ν w_0 + w_1 + w_2) N_k .    (17.13)

Here ν = ν1 + ν2 is the number of smoothing steps used, and w_0, w_1 and w_2 are measures for the computational work per grid point of Ω_k needed for the single components, namely
w_0 : one smoothing step on Ω_k,
w_1 : computation of the defect and its transfer to Ω_{k−1},
w_2 : interpolation of the correction to Ω_k and its addition to the previous approximation.

Usually (in particular, if the multigrid components are constructed in the same way on all grids), w_0, w_1 and w_2 are independent of k. They may, however, depend on k. More generally, in particular for other than standard coarsening, we can assume

    N_k ≐ τ N_{k−1}    (k = 1, 2, ..., ℓ)   with τ > 1 .

In that case we obtain for γ independent of k

    W_ℓ ⪅   C N_ℓ τ/(τ − γ)     (for γ < τ)
            O(N_ℓ log N_ℓ)      (for γ = τ) ,    (17.14)

generalizing (17.12). If we consider, for example, red-black coarsening or semicoarsening (see Figure 17.8), we have τ = 2. In this case, we already see that W-cycles do not yield an asymptotically optimal multigrid method: For τ = 2 and fixed γ, only γ = 1 yields a cycle for which W_ℓ is proportional to N_ℓ.
generalizing (17.12). If we consider, for example, redblack coarsening or semicoarsening (see Figure 17.8), we have = 2. In this case, we already see that Wcycles do not yield an asymptotically optimal multigrid method: For = 2 and fixed , only = 1 yields a cycle for which W is proportional to N . Remark 17.2.16 The computational work of an Fcycle as introduced in Remark 17.2.12 in combination with standard coarsening ( = 4) is also O(N ) since this cycle is more expensive than the Vcycle and less expensive than the Wcycle. In particular, we obtain
WF = W and thus
1 k=1
k
1 4
k1
=W
1 k=1 m=k
1 4
m1
=
16 W 9
1
(17.15)
WF
16 CN . 9
Remark 17.2.17 For 2D semicoarsening (τ = 2) we find that the computational work of an F-cycle is still of the form O(N). This can be seen if we take into account that a grid Ω_k is processed once more often than grid Ω_{k+1} (see Figure 17.12). Correspondingly, if W^1 is the amount of work spent on the finest grid, then the asymptotical amount of work (for ℓ → ∞) of the F-cycle in case of semicoarsening is

    W_ℓ^F ≐ W^1 Σ_{k=1}^{ℓ} k (1/2)^{k−1} ≤ 4 W^1 .
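The two series appearing in Remarks 17.2.16 and 17.2.17 converge to 16/9 and 4, respectively, by the identity Σ k x^{k−1} = 1/(1−x)². A quick numerical confirmation (our own check):

```python
# partial sums of the F-cycle work series for standard coarsening (x = 1/4)
# and for semicoarsening (x = 1/2); both match 1/(1-x)^2
s4 = sum(k * 0.25**(k - 1) for k in range(1, 80))
s2 = sum(k * 0.5**(k - 1) for k in range(1, 200))
```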
So far, we have estimated and discussed the computational work of multigrid cycles. In order to assess the numerical efficiency of such an iterative multigrid solver precisely, it is necessary to take into account both its convergence behavior and the computational effort per iteration step (cycle). For example, in order to decide how many relaxation steps ν = ν1 + ν2 are appropriate per cycle and whether a V-, F- or W-cycle should be used, the effect of this decision on both the convergence speed and the computational work has to be analyzed. We will discuss these efficiency questions in the following section for a specific multigrid Poisson solver.
17.2.5
Multigrid Convergence and Efficiency
In this section we introduce the first specific multigrid algorithm. For that purpose we return to Model Problem 1, the discrete Poisson equation with Dirichlet boundary conditions in the 2D unit square. The algorithm presented here is a highly efficient Poisson solver. Its most characteristic multigrid component is the Red-Black Gauss-Seidel (GS-RB) relaxation for smoothing. We call it the Red-Black Multigrid Poisson Solver. In addition to its numerical efficiency, the algorithm is also highly parallel.

An Efficient 2D Multigrid Poisson Solver

The algorithm is characterized by the following multigrid components. In this characterization certain parameters and components still have to be specified:

                                      1    [     −1     ]
    L_k = L_{h_k} = −Δ_{h_k} =     -----   [ −1   4  −1 ]         ,
                                   h_k^2   [     −1     ]_{h_k}

    smoother: Red-Black Gauss-Seidel relaxation as presented in Section 17.2.1,
    restriction I_k^{k−1}: Full Weighting (17.53) of defects,
    prolongation I_{k−1}^k: bilinear interpolation (17.56),
    standard coarsening: h_{k+1} = h_k / 2,
    size of coarsest grid: h_0 = 1/2.
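A direct transcription of one GS-RB sweep for the discrete Poisson equation (our own sketch; the update rule follows the 5-point stencil, with u and f stored including their boundary values):

```python
import numpy as np

def gs_rb_sweep(u, f, h):
    """One red-black Gauss-Seidel sweep for the 5-point discrete Poisson
    equation; u, f have shape (n+1, n+1) including Dirichlet boundary values."""
    n = u.shape[0] - 1
    for parity in (0, 1):                   # red points first, then black
        for i in range(1, n):
            for j in range(1, n):
                if (i + j) % 2 == parity:
                    u[i, j] = (h * h * f[i, j] + u[i-1, j] + u[i+1, j]
                               + u[i, j-1] + u[i, j+1]) / 4.0
    return u
```

Within each colour the updates are independent of each other, which is why GS-RB vectorizes and parallelizes well, in line with the parallelism claimed for the solver above.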
Remark 17.2.18 This version of the Red-Black Multigrid Poisson Solver is a particularly simple one. There are further variants of this algorithm, some of which are even more efficient, but less generally applicable. For example, the restriction operator Full Weighting can be replaced by Half Weighting (17.55), which is more efficient for certain choices of ν (ν ≥ 3, see Table 17.4).

In the following we want to discuss the influence which the number of relaxations ν = ν1 + ν2 and the cycle type (V, F, W) have on the convergence speed of the algorithm and consequently on the numerical efficiency of the algorithm. We use the notation V(ν1, ν2), F(ν1, ν2) or W(ν1, ν2) to indicate the cycle type and the number of pre- and postsmoothing iteration steps employed. The finest grid is h = 1/256. On this fine grid, we can observe the excellent convergence of multigrid. Furthermore, we can compare theoretical convergence results for multigrid for h tending to 0 with the numerical convergence results observed here. In Figure 17.13 the multigrid convergence of the V(0,1)-cycle (meaning 0 pre- and 1 postsmoothing step, i.e. ν1 = 0, ν2 = 1), of the V(1,1)-, the W(0,1)- and the W(1,1)-cycle is presented, using FW as the restriction operator. The l2 norm of the defect is plotted in a log scale along the y-axis. The x-axis shows the number of multigrid iterations. (The results obtained by the F-cycle and by the W-cycle are nearly identical.) From Figure 17.13, we observe rapid convergence of multigrid, especially for the V(1,1)- and W(1,1)-cycles: they reduce the defect by a factor of 10^12 within 12 multigrid iterations. Also the benefits of processing the coarse grid levels more frequently can be seen: The W-cycle (and the F-cycle) shows a better performance (versus the number of iterations) than the V-cycle.

Remark 17.2.19 In practice, it is usually not necessary to reduce the defect by a factor of 10^12. Convergence to discretization accuracy O(h^2) is sufficient in most cases, and of course obtained much faster.
Here, we reduce the defect further in order to estimate the asymptotic convergence of the multigrid cycle. How to Measure the Multigrid Convergence Factor in Practice In order to construct, evaluate and analyze a multigrid iteration one often wants to determine its convergence factor empirically (by measurement).
Abbildung 17.13: The convergence of different multigrid cycles for Model Problem 1.

In general, the only quantities that are available for the determination of ρ are the defects d_h^m (m = 1, 2, ...). We can measure, for example,

    q^(m) = ||d_h^m|| / ||d_h^{m−1}||    (17.16)

or

    q̂^(m) = ( q^(m) q^(m−1) ··· q^(1) )^{1/m} = ( ||d_h^m|| / ||d_h^0|| )^{1/m}    (17.17)

in some appropriate norm, say the discrete ||·||_2 norm. The quantity q̂^(m) represents an average defect reduction factor over m iterations. For "sufficiently general" d_h^0 ≠ 0 we have

    q̂^(m) → ρ .
q̂^(m) is a good estimate for ρ if m is sufficiently large. In many cases the convergence history is also of interest. This can probably be best represented by graphics as in Figure 17.13 or by a table of the values of q^(m). Often the first few iteration steps do not reflect the asymptotic convergence behavior of the multigrid iteration. Then one may redefine q̂^(m) as

    q̂^(m) = ( ||d_h^m|| / ||d_h^{m_0}|| )^{1/(m−m_0)}

with a small number m_0, typically between 2 and 5. In Table 17.2 we present the values of q^(m) and q̂^(m) from the convergence for Model Problem 1 in Figure 17.13 for the given cycles.

    V(0,1):   q^(26) = 0.343    q̂^(26) = 0.333
    V(1,1):   q^(12) = 0.101    q̂^(12) = 0.089
    W(0,1):   q^(21) = 0.243    q̂^(21) = 0.238
    W(1,1):   q^(11) = 0.063    q̂^(11) = 0.060

Tabelle 17.2: The quantities q^(m) and q̂^(m) as a measure for the convergence of the Red-Black Multigrid Poisson Solver (with FW) for Model Problem 1 (m_0 = 0)
Remark 17.2.20 When performing too many cycles, machine accuracy might limit the measurements substantially. In order to be able to perform sufficiently many iteration steps to see the asymptotic convergence behavior, it is recommendable to consider the homogeneous problem (f = 0) with discrete solution u_h ≡ 0 and to start with an initial guess u_h^0 that is sufficiently large and general. In the case of our example, we then find asymptotic convergence factors of 0.25 for the F(0,1)- or W(0,1)-cycle and of 0.074 for the F(1,1)- or W(1,1)-cycle after a large number of multigrid iterations.

h-independent convergence of multigrid: The numerically measured convergence of the Red-Black Multigrid Poisson Solver is essentially independent of the size of the finest grid in the multigrid cycle. This behavior is demonstrated by the results in Table 17.3. Later on, we will see that this behavior is also predicted by multigrid theory.
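The measurement procedure, including the homogeneous-problem trick of Remark 17.2.20, can be mimicked for any linear iteration u ↦ M u. In the toy example below (our own; M is an arbitrary contraction, not a multigrid operator) q^(m) and q̂^(m) are computed exactly as in (17.16) and (17.17):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((40, 40))
M = 0.2 * B / np.linalg.norm(B, 2)     # iteration matrix, scaled so ||M||_2 = 0.2
d = rng.standard_normal(40)            # defect d^0 of the homogeneous problem
norms = [np.linalg.norm(d)]
for _ in range(30):
    d = M @ d                          # one iteration step
    norms.append(np.linalg.norm(d))
q = norms[-1] / norms[-2]                        # q(30), cf. (17.16)
q_hat = (norms[-1] / norms[0]) ** (1.0 / 30.0)   # q-hat(30), cf. (17.17)
```

Both quantities are bounded by ||M||_2, and for large m the averaged factor q̂^(m) approaches the asymptotic convergence factor of the iteration.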
    Cycle      h = 1/512   h = 1/256   h = 1/128   h = 1/64   h = 1/32   h = 1/16
    V(1,1):    0.10        0.10        0.10        0.10       0.10       0.10
    F(1,1):    0.063       0.063       0.063       0.063      0.063      0.063
    W(1,1):    0.063       0.063       0.063       0.063      0.063      0.063

Tabelle 17.3: Measured convergence factors of the Red-Black Multigrid Poisson Solver (with FW) for Model Problem 1 on grids of different mesh size; in each case, the coarsest grid has a mesh size of h_0 = 1/2.

Numerical Efficiency

In order to choose the most efficient multigrid solver, it is necessary to look at the costs of the different cycles. In practice, the time needed to solve the problem is much more important than convergence factors. In Table 17.4, the wall clock times for different multigrid (cycle) iterations are presented. The iterations were stopped after the initial defects had been reduced by a factor of 10^12. The times were measured on a single workstation. Table 17.4 throws a different light on the convergence results obtained before. The V(2,1)-cycle with HW is the most efficient with respect to the wall clock time on the 256^2 grid. Since W- and F-cycles have the same convergence factor for this model problem, the W-cycle clearly is less efficient than the F-cycle here.
Remark 17.2.21 Table 17.4 shows that it does not pay to use large values for ν1 and ν2. This is a general observation. Though the convergence factors become (somewhat) better if the number of smoothing steps is increased, it is more cost-effective (i.e., the ratio of convergence versus computational effort decreases) not to smooth the error too much but rather to carry out a few more multigrid cycles. In practice, common choices are ν = ν1 + ν2 ≤ 3. This observation can also be verified theoretically.

Remark 17.2.22 Although the V-cycle is obviously the most efficient one for Model Problem 1, we will see later that F- or W-cycles may be superior for more complicated applications.
                 Full Weighting               Half Weighting
    Cycle        Iterations   Time (msec)     Iterations   Time (msec)
    V(0,1):      26           1290            167          7310
    V(1,1):      12           759             13           740
    V(2,1):      10           759             9            629
    V(2,2):      9            799             8            669
    F(0,1):      20           1270            34           1910
    F(1,1):      10           819             10           740
    F(2,1):      9            890             9            829
    F(2,2):      8            930             8            880
    W(0,1):      20           2269            34           3780
    W(1,1):      10           1379            10           1379
    W(2,1):      9            1450            9            1479
    W(2,2):      8            1469            8            1460

Tabelle 17.4: Wall clock times and numbers of iterations for a defect reduction by a factor of 10^12 for different cycles and different restriction operators for Model Problem 1.
17.2.6
Full Multigrid (FMG)
An initial approximation for iterative solvers, like multigrid, can be obtained by nested iteration. The general idea of nested iteration is to provide an initial approximation on a grid by the computation and interpolation of approximations on coarser grids. Within an arbitrary iterative process for the solution of a given discrete problem, this principle simply means that a lower (coarser) discretization level is used in order to provide a good initial approximation for the iteration on the next higher (finer) discretization level. The efficiency of iterative multigrid (MGI) can be improved essentially if it is properly combined with the nested iteration idea. This combination is called the full multigrid (FMG) technique. Typically, the FMG scheme is the most efficient multigrid version. FMG has the two fundamental properties:
(1) An approximation u_h^FMG of the discrete solution u_h can be computed up to an error ||u_h − u_h^FMG|| which is approximately equal to the discretization error ||u − u_h||.

(2) FMG is an asymptotically optimal method, i.e. the number of arithmetic operations required to compute u_h^FMG, which approximates u_h up to discretization accuracy, is proportional to the number of grid points of Ω_h (with only a small constant of proportionality).

The first property is explained in the following remark:
We regard β ≤ 1 as a sufficiently good value. In that case, "up to discretization accuracy" would mean that we allow a factor of 2 when comparing ‖u − u_h^{FMG}‖ with the discretization error. If necessary, much smaller values of β can be achieved, too.

Generally, it is not worthwhile to invest more work in a more accurate approximation u_h^{FMG} than suggested here, because it is more cost-effective to refine the grid once more than to solve the present problem more accurately. It should be kept in mind that the final goal usually is a good approximation to the differential problem, not a good solution to the discrete problem.

At this point, we would also like to add a remark which refers to a question multigrid beginners sometimes have.
Remark 17.2.24 In general, it is not sufficient to start the solution process on a very coarse grid, interpolate the approximation of the coarse-grid solution to the next finer grid, smooth the visible error components, and so on until the finest grid is reached. The interpolation of the approximation introduces non-negligible high and low frequency error components on the fine grid, which can be reduced efficiently only by subsequent smoothing of the error on all grid levels, i.e. by revisiting the coarse levels in multigrid cycles.

Structure of Full Multigrid

Since we have described the basic multigrid iteration only for linear problems so far, we confine ourselves to the linear case in this section, too. We want to point out, however, that the FMG technique can immediately be combined with nonlinear multigrid (FAS). As in Section 17.2.4, we consider a sequence of discrete approximations L_k u_k = f_k (k = 0, …, ℓ) to the differential problem (17.2). As in Section 17.2.4, we use the notation

    MGCYC^r(k + 1, γ, w_k, L_k, f_k) : G(Ω_k) → G(Ω_k)    (17.21)
for a procedure consisting of r steps of a suitable iterative (k + 1)-grid cycle with cycle index γ for solving L_k u_k = f_k with initial approximation w_k (using the grids Ω_k, …, Ω_0). The right-hand sides f_k on Ω_k can be defined recursively by f_k = I_{k+1}^{k} f_{k+1} with f_ℓ = f, or simply by restricting f to Ω_k. The objective is to achieve discretization accuracy on each level within a few (typically r = 1 or r = 2) multigrid cycles. (In general, r may depend on k.) The structure of FMG (with r = 1 and γ = 1) is illustrated in Figure 17.14.

In contrast to the interpolation I_{k−1}^{k} within the multigrid cycle, which is applied to corrections, the FMG interpolation Π_{k−1}^{k} from Ω_{k−1} to Ω_k transfers approximations of the solution to the fine grid. Moreover, the operator Π_{k−1}^{k} represents an interpolation procedure which usually is of higher accuracy than the interpolation used within the multigrid iteration.

Example 17.3 Often, bicubic interpolation is an appropriate FMG interpolation. If, for instance, (x − 3h, y), (x − h, y), (x + h, y) and (x + 3h, y) are points on the coarse grid where an approximation u_2h is known, cubic interpolation can be used to compute an approximation u_h(x, y) at the fine
Abbildung 17.14: Structure of FMG with r = 1 and γ = 1 when using ℓ = 4 (i.e. 5 grid levels); the transfers between levels marked in the figure denote the FMG interpolation.

grid point (x, y):

    u_h(x, y) = −1/16 (u_2h(x − 3h, y) + u_2h(x + 3h, y)) + 9/16 (u_2h(x − h, y) + u_2h(x + h, y))
              = 1/16 [−1  9  ·  9  −1] u_2h
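In one space direction, this cubic FMG interpolation step can be sketched as follows (a minimal sketch: the function name and the linear fallback at the two near-boundary midpoints are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

def fmg_interpolate_1d(u2h):
    """Transfer a coarse-grid approximation u2h to the next finer 1D grid.

    Coincident points are injected; in-between points use the cubic
    stencil 1/16 [-1 9 . 9 -1] from the text. Near the boundary, where
    four coarse neighbours are not available, we fall back to linear
    interpolation (an assumption for this sketch).
    """
    n2 = len(u2h) - 1              # number of coarse-grid intervals
    uh = np.empty(2 * n2 + 1)
    uh[::2] = u2h                  # coincident points: inject
    # interior midpoints: weights -1/16, 9/16, 9/16, -1/16
    uh[3:-3:2] = (-u2h[:-3] + 9*u2h[1:-2] + 9*u2h[2:-1] - u2h[3:]) / 16
    uh[1] = 0.5 * (u2h[0] + u2h[1])      # near-boundary: linear
    uh[-2] = 0.5 * (u2h[-2] + u2h[-1])
    return uh
```

Since the stencil reproduces cubic polynomials exactly, interpolating samples of x³ gives exact values at all interior midpoints, which is a convenient sanity check.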
Cubic interpolation in the y-direction is analogous.

In detail, the FMG algorithm proceeds as follows:

Full Multigrid

For k = 0:  Solve L_0 u_0 = f_0, providing u_0^{FMG} = u_0.
For k = 1, 2, …, ℓ:
    u_k^0 := Π_{k−1}^{k} u_{k−1}^{FMG}
    u_k^{FMG} := MGCYC^r(k + 1, γ, u_k^0, L_k, f_k)

Here, u_ℓ^{FMG} denotes the resulting FMG approximation on the finest grid Ω_ℓ.
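The algorithm above can be sketched end to end for a simpler setting: the 1D model problem −u'' = f on (0,1) with homogeneous Dirichlet conditions (an assumption for brevity; the text treats the 2D Model Problem 1). MGCYC is realized here as a V-cycle (γ = 1) with damped Jacobi smoothing, and the FMG interpolation is linear rather than cubic, so this is a structural illustration, not the solver of the text:

```python
import numpy as np

def smooth(u, f, h, sweeps, omega=2/3):
    """Damped Jacobi sweeps for the 3-point stencil (2u_i - u_{i-1} - u_{i+1})/h^2."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (h**2 * f[1:-1] + u[:-2] + u[2:] - 2*u[1:-1])
    return u

def v_cycle(u, f, h, nu1=2, nu2=2):
    """One multigrid V-cycle down to the 1-interior-point grid."""
    n = len(u) - 1
    if n == 2:                          # coarsest grid: solve exactly
        u[1] = h**2 * f[1] / 2
        return u
    u = smooth(u, f, h, nu1)
    r = np.zeros_like(u)                # residual f - A u
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    rc = np.zeros(n // 2 + 1)           # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)
    e = np.zeros_like(u)                # linear interpolation of the correction
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h, nu2)

def fmg(f_func, levels, r=1):
    """k = 0: exact solve; k = 1..levels: FMG-interpolate, then r cycles."""
    n, u = 2, np.zeros(3)
    u[1] = (1/n)**2 * f_func(0.5) / 2   # exact solve on the coarsest grid
    for _ in range(levels):
        n *= 2
        uf = np.zeros(n + 1)
        uf[::2] = u                     # FMG interpolation (linear in this sketch)
        uf[1::2] = 0.5 * (u[:-1] + u[1:])
        f = f_func(np.linspace(0, 1, n + 1))
        for _ in range(r):
            uf = v_cycle(uf, f, 1/n)
        u = uf
    return u

x = np.linspace(0, 1, 129)
u = fmg(lambda t: np.pi**2 * np.sin(np.pi * t), levels=6)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # error near discretization level
```

With r = 1 the error against the exact solution sin(πx) ends up within a small factor of the discretization error, matching the first fundamental FMG property.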
Later on, we will see that FMG delivers an approximation up to discretization accuracy if the multigrid cycle converges satisfactorily and if the order of the FMG interpolation is higher than the discretization order of L_ℓ. A common FMG interpolation for second-order accurate discretizations is cubic (multipolynomial) interpolation. Cheaper interpolations especially suited for Model Problem 1 can also be constructed.

Computational Work

The computational work W^{FMG} needed for the FMG method can easily be estimated. By arguments similar to those in Section 17.2.4, one immediately obtains, for example,

    W^{FMG} ⪅ (4/3) r W_ℓ + (4/3) W_ℓ^{INT}   for standard coarsening in 2D,
    W^{FMG} ⪅ 2 r W_ℓ + 2 W_ℓ^{INT}           for semicoarsening in 2D,    (17.22)

(again neglecting lower-order terms). Here, W_ℓ^{INT} denotes the work needed for the FMG interpolation process from grid Ω_{ℓ−1} to the finest grid Ω_ℓ, and W_ℓ is the work required for one multigrid cycle on the finest level ℓ.
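The geometric-series argument behind the first bound in (17.22) can be checked with a toy operation count, under the simplifying assumption that one cycle on level k costs exactly 4^k units (standard coarsening in 2D reduces the number of grid points by a factor of about 4 per level):

```python
# FMG performs r multigrid cycles on every level k = 1, ..., ell.
# Since cycle cost is proportional to the number of grid points N_k,
# the total is at most r * W_ell * (1 + 1/4 + 1/16 + ...) = (4/3) r W_ell.
r, ell = 1, 12
W = [4**k for k in range(ell + 1)]   # cycle cost on level k (arbitrary units)
W_fmg = r * sum(W[1:ell + 1])        # r cycles on each level k = 1, ..., ell
print(W_fmg / W[ell])                # approaches 4/3 for large ell
```

The same count with a per-level cost ratio of 1/2 (semicoarsening) reproduces the factor 2 of the second bound.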
Remark 17.2.25 The number of operations required for FMG is governed by W_ℓ and W_ℓ^{INT}. Usually, both of them are O(N_ℓ) (see (17.12) or (17.14)). Thus, FMG also requires only O(N_ℓ) operations.

FMG for Poisson's Equation

We now give results of the Red-Black Poisson Solver for FMG starting on the coarsest grid. As the FMG interpolation, we choose cubic interpolation. All other multigrid components are as introduced in Section 17.2.5. In particular, we consider the question whether the discretization accuracy O(h²) is really achieved by FMG with r = 1. In our model problem we choose the right-hand sides in Ω and on the boundary Γ in such a way that a known solution results, for example u(x, y) = e^{xy}. Thus, we can observe the second-order accuracy by comparing the discrete numerical solution with the analytical solution on finer and finer grids. Table 17.5 compares the accuracy obtained by FMG (for r = 1) with that of the exact discrete solution for a series of grids.
Grid     ‖u − u_h‖   V(0,1)    V(1,1)    F(0,1)    F(1,1)
32²      .31(−5)     .26(−4)   .47(−5)   .86(−5)   .32(−5)
64²      .77(−6)     .83(−5)   .12(−5)   .13(−5)   .77(−6)
128²     .19(−6)     .27(−5)   .31(−6)   .20(−6)   .19(−6)
256²     .48(−7)     .87(−6)   .78(−7)   .48(−7)   .48(−7)
Tabelle 17.5: ‖u − u_h^{FMG}‖ for different grid sizes, cycle types and numbers of relaxations (r = 1); here .31(−5) denotes 0.31 · 10⁻⁵.

We see that the only cycle which does not achieve second-order accuracy for this example is the V(0,1)-cycle. For all the others, the discrete error is reduced by a factor of about four when the mesh size is halved. Correspondingly, the errors are in the range of the discretization error. FMG thus proves to be a very efficient solution method, which costs less than two multigrid cycles. Table 17.6 shows the computing times obtained on a common workstation for the four FMG algorithms of Table 17.5 on the 256² grid. The V(1,1)- and F(0,1)-cycles are thus the most efficient ones that achieve second-order accuracy for this test problem.

V(0,1)   V(1,1)   F(0,1)   F(1,1)
100      120      120      150
Tabelle 17.6: Computing times in milliseconds for FMG on a 256² grid