Correct handling of Neumann BCs


rasg
07-03-2004, 10:08 PM
I modified mgfas and its friends so that I can use Neumann boundary conditions and/or inhomogeneous boundary conditions. The test problem I'm trying is
del^2 u = rho
with u = sin(pi*x) * sin(1.5*pi*y) as the trial solution.

So there can either be a Neumann condition at y=1, or I can use the inhomogeneous Dirichlet condition u(x,1) = -sin(pi*x).
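As a sanity check on the test problem (my own NumPy sketch, not mgfas code): the source term is rho = del^2 u = -(1 + 1.5^2) pi^2 u, the y-derivative vanishes at y=1 because cos(1.5*pi) = 0 (so the homogeneous Neumann condition is consistent there), and sin(1.5*pi) = -1 gives the equivalent Dirichlet condition u(x,1) = -sin(pi*x).

```python
import numpy as np

# Trial solution, its analytic source term, and its normal derivative
u     = lambda x, y: np.sin(np.pi * x) * np.sin(1.5 * np.pi * y)
rho   = lambda x, y: -(1.0 + 1.5 ** 2) * np.pi ** 2 * u(x, y)   # del^2 u
du_dy = lambda x, y: 1.5 * np.pi * np.sin(np.pi * x) * np.cos(1.5 * np.pi * y)

x = np.linspace(0.0, 1.0, 11)
# cos(1.5*pi) = 0: the normal derivative vanishes at y = 1, so the
# homogeneous Neumann condition is consistent there
print(np.max(np.abs(du_dy(x, 1.0))))                  # ~ machine precision
# sin(1.5*pi) = -1: the Dirichlet alternative is u(x,1) = -sin(pi*x)
print(np.max(np.abs(u(x, 1.0) + np.sin(np.pi * x))))  # ~ machine precision

# Spot-check rho against a centred finite-difference Laplacian
h, xc, yc = 1e-4, 0.3, 0.4
lap = (u(xc + h, yc) + u(xc - h, yc)
       + u(xc, yc + h) + u(xc, yc - h) - 4.0 * u(xc, yc)) / h ** 2
print(abs(lap - rho(xc, yc)))    # small (finite-difference truncation error)
```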

In both cases a single FMG cycle converges to the level of truncation error, as measured by the rms error with the known solution. (Additional V-cycles do not improve the rms error, and it goes down as h^2 as I change the finest grid size.) However, the residual after the FMG cycle is huge (compared with the residual of the known solution using the fine-grid residual operator) and does not go down as h^2. Further V-cycles only improve the residual by about a factor of 3 each time for the Neumann case (the inhomogeneous Dirichlet case gets about a factor of 30 each V-cycle).

The automatic termination condition is also being thwarted because the estimated truncation error is even bigger than the residual, so the routine always terminates after one cycle.

I'm wondering if I am handling the boundary conditions incorrectly. What is the right way to do that? In particular, restriction seems problematic. Currently, for the Neumann condition I am just ignoring the parts of the stencil that would reach outside the array. However, that means that at the edges the stencil does not sum to 1, which seems a bit odd. Should I renormalize it? How? Divide by the sum of the entries? Or should I double the interior entries (i.e. pretend there is a false boundary such that u(N+1,j) = u(N-1,j) and derive the stencil using that condition)?

What about the inhomogeneous Dirichlet conditions? Right now I am imposing the condition at all levels. Would it be better to get the boundary conditions on the coarse grids by restricting the fine-grid condition? If so, then the issue of normalization of the restriction stencil comes up again... (Also, presumably one does not apply the correction to the boundary.) Are there other subtleties as well?

rasg
07-22-2004, 07:59 PM
Well, I managed to solve my problems (but in doing so I discovered another problem; see my other thread). For posterity, here is what I discovered:

It turns out that I did, in fact, need to modify the restriction stencil at the boundary. At the right boundary, for example, the half-weighting stencil becomes:

( 0   1/8   0 )      ( 0   1/8  x )
(1/8  1/2  1/8)  ->  (1/4  1/2  x )
( 0   1/8   0 )      ( 0   1/8  x )
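In case it helps anyone implementing this, here is a minimal NumPy sketch of half-weighting restriction with that boundary-modified stencil (my own sketch, assuming vertex-centred grids with coarse point (I,J) coinciding with fine point (2I,2J); the other three edges are just injected here to keep it short). Note that a constant field restricts to the same constant, because every stencil, including the boundary one, sums to 1.

```python
import numpy as np

def restrict_half_weighting(uf):
    """Half weighting from a (nf, nf) fine grid, nf = 2*nc - 1.

    Interior points use the standard stencil (centre 1/2, N/S/E/W
    each 1/8).  On the right (Neumann) edge the east weight is folded
    onto the west one, giving (1/4, 1/2, x) as in the post above, so
    the boundary stencil still sums to 1.  The other edges are
    handled by plain injection for brevity.
    """
    nf = uf.shape[0]
    nc = (nf + 1) // 2
    uc = np.empty((nc, nc))
    uc[0, :]  = uf[0, ::2]     # top, bottom, left edges: injection
    uc[-1, :] = uf[-1, ::2]
    uc[:, 0]  = uf[::2, 0]
    for I in range(1, nc - 1):             # interior: standard stencil
        for J in range(1, nc - 1):
            i, j = 2 * I, 2 * J
            uc[I, J] = 0.5 * uf[i, j] + 0.125 * (
                uf[i - 1, j] + uf[i + 1, j] + uf[i, j - 1] + uf[i, j + 1])
    j = nf - 1                             # right edge: modified stencil
    for I in range(1, nc - 1):
        i = 2 * I
        uc[I, -1] = (0.5 * uf[i, j] + 0.25 * uf[i, j - 1]
                     + 0.125 * (uf[i - 1, j] + uf[i + 1, j]))
    return uc

# A constant grid must restrict to the same constant,
# since every stencil sums to 1.
uc = restrict_half_weighting(np.full((9, 9), 3.0))
```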

However, this is only to be used if the linear operator corresponding to the stencil is not symmetrized. For example, eliminating the ghost points u_{i,N+1} from the standard 5-point difference star at the right boundary via the Neumann condition u_{i,N+1} = u_{i,N-1} leaves a coefficient of 2 in the western position, while the eastern coefficient of the adjacent stencil is 1:

(0  1  0)      (0  1  x)
(1 -4  1)  ->  (2 -4  x)
(0  1  0)      (0  1  x)

If you divide this boundary stencil by 2 (along with the RHS entries in the last column), then the sparse matrix the system represents is symmetric. In that case I found it was appropriate to use the restriction stencil you get by just ignoring the coefficients that refer to the ghost points:
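To see the symmetrization concretely, here is a 1-D analogue in NumPy (my own sketch, not mgfas code): a second-difference operator with the right-hand ghost point eliminated by the reflection u_{n+1} = u_{n-1}. The raw matrix has a 2 in the boundary row where its transpose has a 1; halving that row (and the matching RHS entry, which leaves the solution unchanged) restores symmetry.

```python
import numpy as np

# 1-D second differences on unknowns u_1..u_n: Dirichlet on the left,
# Neumann on the right via the reflected ghost point u_{n+1} = u_{n-1}.
n = 6
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
A[-1, -2] = 2.0                 # the reflection doubles the inner coefficient

sym_before = np.allclose(A, A.T)
A[-1, :] /= 2.0                 # halve the boundary row (and, in a real
                                # solve, the matching RHS entry)
sym_after = np.allclose(A, A.T)
print(sym_before, sym_after)    # False True
```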

( 0   1/8  x)
(1/8  1/2  x)
( 0   1/8  x)


For inhomogeneous Dirichlet conditions I found that injection on the boundary works just fine. I also tried a prescription that imagines ghost points whose values are a linear extrapolation through the boundary value and the point just inside the boundary. I only tried that with full weighting (where it causes the boundary to be restricted with the one-dimensional version of full weighting), and there was no difference in performance, although I suspect it might be better for nonlinear problems.
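A quick numerical check of that last claim (again my own sketch, not mgfas code): with a ghost row u_{i,-1} = 2*u_{i,0} - u_{i,1} linearly extrapolated through the boundary, the 2-D full-weighting stencil (corners 1/16, edges 1/8, centre 1/4) applied at a boundary point collapses exactly to the 1-D full-weighting stencil (1/4, 1/2, 1/4) along the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
u0 = rng.standard_normal(5)      # boundary row values u_{i,0}
u1 = rng.standard_normal(5)      # first interior row u_{i,1}
ughost = 2.0 * u0 - u1           # linearly extrapolated ghost row u_{i,-1}

i = 2                            # a boundary point with both neighbours
# 2-D full weighting applied at (i, 0) using the ghost row
fw2d = ((ughost[i - 1] + ughost[i + 1] + u1[i - 1] + u1[i + 1]) / 16.0
        + (ughost[i] + u0[i - 1] + u0[i + 1] + u1[i]) / 8.0
        + u0[i] / 4.0)
# 1-D full weighting along the boundary row
fw1d = 0.25 * u0[i - 1] + 0.5 * u0[i] + 0.25 * u0[i + 1]
print(np.isclose(fw2d, fw1d))    # the 2-D stencil collapses to the 1-D one
```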