# NNESMO: Session 6 – Solving non-linear PDEs

Dr Simon A Mathias

Department of Earth Sciences

Durham University

## Contents

- This session’s learning objectives
- The Burgers’ equation (BE)
- ETCS – Explicit time-stepping with central differences in space
- ITBS – Implicit time-stepping with backward differences
- The Newton iteration method
- SITBS – Semi-implicit time-stepping with backward differences
- Classroom Assignment

## This session’s learning objectives

At the end of this session you should be able to:

- Derive multiple finite difference solutions for the Burgers’ equation.
- Appreciate the difficulty of specifying stability criteria for non-linear problems.
- Solve non-linear partial differential equations (PDE) using a semi-implicit scheme.
- Use a Newton iteration method to develop a fully implicit method for non-linear PDEs.
- Solve non-linear PDEs using MATLAB’s ODE solver, ODE15s.

## The Burgers’ equation (BE)

The Burgers’ equation (BE) represents a fluid momentum conservation statement in the presence of negligible pressure changes and body forces. For a one-dimensional system, the BE can be written out as follows:

$$\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^2 u}{\partial x^2}\qquad(1a)$$

where $u$ [m/day] is velocity, $x$ [m] is distance, $t$ [days] is time and $\nu$ [m$^2$/day] is kinematic viscosity.

The BE is interesting to study from a computational perspective because it represents a simple modification of the advection diffusion equation (ADE) such that a non-linear partial differential equation (PDE) is produced. Comparing Eq. (1a) with Eq. (1) from the previous session, it can be seen that the advection velocity is now the dependent variable, $u$, itself and that the role of the diffusion coefficient is played by $\nu$. Non-linear forms of the ADE come about in the Earth sciences in a variety of different applications, for example, fluid flow through unsaturated porous media (the Richards’ equation), or heat conduction through rock formations where the material properties (e.g., specific heat capacity and conductivity) have a strong dependence on temperature.

As will be seen subsequently, adding just a small amount of non-linearity to a PDE leads to the need for quite different treatment when seeking to develop numerical solutions using approximate techniques such as finite differences.

For the next few examples, the following initial and boundary conditions are considered:

$$u(x,0)=\begin{cases}u_0,&|x|\le x_0\\[2pt]0,&|x|>x_0\end{cases}\qquad\qquad u\!\left(\pm\tfrac{1}{2}L,t\right)=0\qquad(1b)$$

where $u_0$ [m/day] is the initial velocity of a region of $2x_0$ [m] length and $L$ [m] represents the distance between the two boundary conditions.

For the special case where $L\to\infty$, the above system of equations can be solved exactly to obtain (McElwaine, 2016, pers. com.)

$$u=\frac{u_0}{1+\dfrac{T_3\,\mathrm{erfc}(x_3)+T_4\,\mathrm{erfc}(x_4)}{\mathrm{erfc}(x_2)-\mathrm{erfc}(x_1)}}\qquad(1c)$$

where

$$x_1=\frac{x+x_0-u_0t}{\sqrt{4\nu t}},\quad x_2=\frac{x-x_0-u_0t}{\sqrt{4\nu t}},\quad x_3=\frac{x_0+x}{\sqrt{4\nu t}},\quad x_4=\frac{x_0-x}{\sqrt{4\nu t}}$$

$$T_3=\exp\!\left[\frac{u_0}{2\nu}\!\left(x+x_0-\frac{u_0t}{2}\right)\right],\qquad T_4=\exp\!\left[\frac{u_0}{2\nu}\!\left(x-x_0-\frac{u_0t}{2}\right)\right]$$

To evaluate the analytical solution, type in the following code in a single script-file and save it in an appropriate place as “BurgersStudy.m”.

```matlab
function BurgersStudy
%A sequence of different subfunctions to solve the Burgers' equation.
%Define model parameters
nu=0.1; %m^2/day
u0=1.2; %m/day
x0=0.7; %m
%Define location of points for plotting
x=linspace(-4,6,100);
%Define values of time (days) to be used for plotting
t=[0.5 1 2 4 8];
%Evaluate the exact solution
uA=AnalSol(x,t,nu,u0,x0);
%Plot results
figure(1)
hold off
plot(x,uA,'.')
xlabel('Distance (m)')
ylabel('Velocity (m/day)')
for n=1:numel(t)
    MYlegend{n}=[num2str(t(n)) ' days'];
end
legend(MYlegend,'location','northwest')
%**********************************************************************
function u=AnalSol(x,t,nu,u0,x0)
%Subfunction containing the analytical solution
[x,t]=ndgrid(x,t);
x1=(x+x0-u0*t)./sqrt(4*nu*t);
x2=(x-x0-u0*t)./sqrt(4*nu*t);
x3=(x0+x)./sqrt(4*nu*t);
x4=(x0-x)./sqrt(4*nu*t);
c=u0/2/nu;
T3=exp(c*(x+x0-u0*t/2));
T4=exp(c*(x-x0-u0*t/2));
u=u0./(1+(T3.*erfc(x3)+T4.*erfc(x4))./(erfc(x2)-erfc(x1)));
```

The remainder of this session proceeds as follows. First, a simple explicit time-stepping scheme for the Burgers’ equation is developed and its stability requirement is assessed. Following on from this, a more stable implicit scheme is developed. However, because of the non-linearity in the Burgers’ equation, the implicit scheme cannot be implemented directly. Therefore, the Newton iteration method is introduced as a methodology for iteratively solving the implicit scheme. Subsequently, it is shown that the first iteration of the fully implicit scheme gives rise to an alternative unconditionally stable scheme, referred to as a semi-implicit scheme. Finally, there is a classroom assignment, which involves modifying code from “ADEStudy.m” (developed during the previous session) to evaluate the various possible finite difference schemes that can be used to solve the Burgers’ equation.

## ETCS – Explicit time-stepping with central differences in space

First we will develop an Euler explicit time-stepping scheme for the BE problem described above and use a central difference approximation for the $\partial u/\partial x$ term. Ignoring all the truncation errors associated with the Taylor expansions (discussed in the previous session), the BE can then be written as

$$\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}+u_i^n\left(\frac{u_{i+1}^{n}-u_{i-1}^{n}}{2\Delta x}\right)=\nu\left(\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^2}\right)$$

where $u_i^n\approx u(x_i,t_n)$, and $\Delta x$ and $\Delta t$ are the space-step and time-step, respectively.

which can be rearranged to get

$$u_i^{n+1}=a_iu_{i-1}^{n}+b_iu_{i}^{n}+c_iu_{i+1}^{n}$$

where

$$a_i=\frac{\nu\Delta t}{\Delta x^2}+\frac{u_i^n\Delta t}{2\Delta x},\qquad b_i=1-\frac{2\nu\Delta t}{\Delta x^2},\qquad c_i=\frac{\nu\Delta t}{\Delta x^2}-\frac{u_i^n\Delta t}{2\Delta x}$$

or alternatively it can be said that

$$u_i^{n+1}=A_iu_{i-1}^{n}+B_iu_{i}^{n}+C_iu_{i+1}^{n}$$

where

$$A_i=\frac{Cr_i}{Pe_i}+\frac{Cr_i}{2},\qquad B_i=1-\frac{2Cr_i}{Pe_i},\qquad C_i=\frac{Cr_i}{Pe_i}-\frac{Cr_i}{2}$$

in which $Cr_i=u_i^n\Delta t/\Delta x$ and $Pe_i=u_i^n\Delta x/\nu$.

Hereafter, both of the above schemes will be referred to as the ETCS scheme.

Recall that for numerical stability of explicit time-stepping schemes to be ensured, it is generally found that the coefficients, $a_i$, $b_i$ and $c_i$ (or $A_i$, $B_i$ and $C_i$), need always to be positive. For the case when $\nu$, $\Delta t$, $\Delta x$ and $u_i^n$ are positive, it follows that there are two stability criteria for these schemes:

$$Pe_i\le 2\qquad\text{and}\qquad Cr_i\le\frac{Pe_i}{2}$$

where $Cr_i=u_i^n\Delta t/\Delta x$ and $Pe_i=u_i^n\Delta x/\nu$ can be thought of as being an effective Courant number and an effective Peclet number, respectively. Note that, in contrast to the linear ADE, both numbers depend on the evolving solution, $u_i^n$.

The issue here, which is common to most non-linear PDEs, is that it is not straightforward to specify, a priori, a $\Delta t$ and $\Delta x$ that will ensure stability of the system for the desired simulation duration.
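As a concrete check of these criteria, the short calculation below evaluates the effective Peclet number and the largest stable time-step for the parameter values used in BurgersStudy.m, taking the initial velocity $u_0$ as the largest value of $u_i^n$; the grid spacing dx is an assumed example value, not one prescribed in the text:

```matlab
%Evaluate the ETCS stability criteria for the BurgersStudy.m parameters
nu=0.1; %m^2/day
u0=1.2; %m/day
dx=0.1; %m (illustrative grid spacing)
Pe=u0*dx/nu; %effective Peclet number, required to be <= 2
dtmax=dx^2/(2*nu); %largest dt (days) for which b_i remains non-negative
```

For these values, Pe = 1.2 (so the Peclet criterion is satisfied) and dtmax = 0.05 days, so it is the time-step criterion that constrains the simulation here.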

Considering our experience from the previous session, it is postulated that an implicit time-stepping scheme with a backward difference for the $\partial u/\partial x$ term is likely to lead to an unconditionally stable scheme.

## ITBS – Implicit time-stepping with backward differences

If we consider an Euler implicit time-stepping scheme and use a backward difference approximation for the $\partial u/\partial x$ term, ignoring all the truncation errors associated with the Taylor expansions, the BE can be written as

$$\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}+u_i^{n+1}\left(\frac{u_{i}^{n+1}-u_{i-1}^{n+1}}{\Delta x}\right)=\nu\left(\frac{u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1}}{\Delta x^2}\right)$$

which can be rearranged to get

$$a_iu_{i-1}^{n+1}+b_iu_{i}^{n+1}+c_iu_{i+1}^{n+1}=u_i^n\qquad(3a)$$

where

$$a_i=-\frac{\nu\Delta t}{\Delta x^2}-\frac{u_i^{n+1}\Delta t}{\Delta x},\qquad b_i=1+\frac{2\nu\Delta t}{\Delta x^2}+\frac{u_i^{n+1}\Delta t}{\Delta x},\qquad c_i=-\frac{\nu\Delta t}{\Delta x^2}$$

or alternatively it can be said that

$$A_iu_{i-1}^{n+1}+B_iu_{i}^{n+1}+C_iu_{i+1}^{n+1}=u_i^n$$

where

$$A_i=-\frac{Cr_i}{Pe_i}-Cr_i,\qquad B_i=1+\frac{2Cr_i}{Pe_i}+Cr_i,\qquad C_i=-\frac{Cr_i}{Pe_i}$$

in which $Cr_i=u_i^{n+1}\Delta t/\Delta x$ and $Pe_i=u_i^{n+1}\Delta x/\nu$.

Hereafter this scheme will be referred to as the ITBS scheme.

Recalling from the previous session, for implicit time-stepping schemes, the coefficients $a_i$ and $c_i$ (or $A_i$ and $C_i$) need to be negative and the coefficients $b_i$ (or $B_i$) need to be positive to ensure stability. Consequently, for the case when $\nu$, $\Delta t$, $\Delta x$ and $u_i^{n+1}$ are positive, the ITBS scheme should be unconditionally stable. However, for the situation dictated by the boundary conditions given in Eq. (1b), even for positive $u_0$, the numerical solution can develop regions where $u_i^{n+1}<0$, for which the required signs of the coefficients are no longer guaranteed.

Despite the fact that ITBS does not appear to be unconditionally stable, it can be appreciated that this scheme is likely to be much more stable than the ETCS scheme.

However, recall that the $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ vectors of coefficients represent the first, second and third diagonals of a tridiagonal Jacobian matrix, $\mathbf{J}$, defined by the formula

$$\mathbf{J}\,\mathbf{u}^{n+1}=\mathbf{u}^{n}\qquad(3b)$$

Another challenge here is that evaluation of $\mathbf{J}$ also requires values from the $\mathbf{u}^{n+1}$ vector. Consequently, Eq. (3b) must be solved iteratively.

## The Newton iteration method

The Newton iteration method (also referred to as the Newton-Raphson method) is a simple root-finding algorithm often used to solve non-linear PDEs using implicit time-stepping schemes such as Eq. (3b) above.

The Newton iteration method for root finding can be simply derived as follows:

Consider a function, $f(x)$. It is desired to find a value, $x=x_r$, such that $f(x_r)=0$ (i.e., a root of $f(x)$) that is situated close to a value $x=x_0$.

The equation for a line, $g(x)$, that lies tangent to $f(x)$ at $x=x_0$ can be written as

$$g(x)=f(x_0)+f'(x_0)(x-x_0)\qquad(4a)$$

A good estimate for a value of $x_r$, which we will call $x_1$, can be found by extrapolating the tangent equation, Eq. (4a), to $g(x_1)=0$, i.e.:

$$0=f(x_0)+f'(x_0)(x_1-x_0)$$

which can be rearranged to get

$$x_1=x_0-\frac{f(x_0)}{f'(x_0)}$$

from which it can be envisaged that a progressively closer estimate of $x_r$ can be obtained from the recursive relationship

$$x_{m+1}=x_m-\frac{f(x_m)}{f'(x_m)}\qquad(4b)$$

where $m$ denotes the number of iterations that have been undertaken.
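To make the recursion in Eq. (4b) concrete, the short example below applies it to the scalar function $f(x)=x^2-2$, whose positive root is $\sqrt{2}$; the function and starting value are illustrative choices only, not part of the Burgers' problem:

```matlab
%Newton iteration (Eq. (4b)) applied to f(x)=x^2-2, starting from x=1
f=@(x)x.^2-2; %illustrative function with root sqrt(2)
dfdx=@(x)2*x; %its derivative
x=1; %starting value, x_0
for m=1:5
    x=x-f(x)/dfdx(x); %Eq. (4b)
end
```

After only five iterations, x agrees with sqrt(2) to within 1e-10, illustrating the rapid convergence of the method near a simple root.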

The above approach can also be used to solve systems of non-linear equations such as Eq. (3b). In this case it is desired to find the roots of the vector function, $\mathbf{f}(\mathbf{u}^{(m)})$, where $\mathbf{u}^{(m)}$ denotes the estimate of $\mathbf{u}^{n+1}$ from the $m$-th iteration, which satisfies

$$\mathbf{f}(\mathbf{u}^{(m)})=\mathbf{J}\,\mathbf{u}^{(m)}-\mathbf{u}^{n}$$

from which it can be seen that

$$\frac{\partial\mathbf{f}}{\partial\mathbf{u}^{(m)}}\approx\mathbf{J}$$

Further consideration of Eq. (4b) then suggests that a progressively closer estimate of $\mathbf{u}^{n+1}$ can be obtained from

$$\mathbf{u}^{(m+1)}=\mathbf{u}^{(m)}-\mathbf{J}^{-1}\left(\mathbf{J}\,\mathbf{u}^{(m)}-\mathbf{u}^{n}\right)$$

which reduces to

$$\mathbf{u}^{(m+1)}=\mathbf{J}^{-1}\,\mathbf{u}^{n}\qquad(4c)$$

where $\mathbf{J}$ is the Jacobian matrix (as defined in Eq. (3b)) as determined assuming $\mathbf{u}^{n+1}\approx\mathbf{u}^{(m)}$.

A good starting point is to try $\mathbf{u}^{(0)}=\mathbf{u}^{n}$.

The iterative process should proceed until the scheme has achieved an acceptable level of local convergence. A suitable criterion for accepting $\mathbf{u}^{(m+1)}$ as a good enough approximation of $\mathbf{u}^{n+1}$ can be written as follows:

$$\frac{1}{N}\sum_{i=1}^{N}\left|u_i^{(m+1)}-u_i^{(m)}\right|<\epsilon$$

where $N$ denotes the number of points that $x$ has been discretised into and $\epsilon$ is a small error tolerance.

## SITBS – Semi-implicit time-stepping with backward differences

Note that if only one iteration of the above scheme is considered, starting from $\mathbf{u}^{(0)}=\mathbf{u}^{n}$, we have

$$\mathbf{u}^{n+1}\approx\mathbf{u}^{(1)}=\mathbf{J}^{-1}\,\mathbf{u}^{n}$$

which can be rearranged to get

$$a_iu_{i-1}^{n+1}+b_iu_{i}^{n+1}+c_iu_{i+1}^{n+1}=u_i^n\qquad(5a)$$

where

$$a_i=-\frac{\nu\Delta t}{\Delta x^2}-\frac{u_i^{n}\Delta t}{\Delta x},\qquad b_i=1+\frac{2\nu\Delta t}{\Delta x^2}+\frac{u_i^{n}\Delta t}{\Delta x},\qquad c_i=-\frac{\nu\Delta t}{\Delta x^2}$$

which, providing $u_i^n\ge 0$, can be seen to be unconditionally stable.

Hereafter, this scheme will be referred to as the SITBS scheme.

Such a scheme is often referred to as a semi-implicit scheme, or a mixed implicit and explicit scheme, because it is implicit in the sense that Eq. (5a) cannot be solved explicitly for $u_i^{n+1}$, but explicit in the sense that the $u$ values in the $a_i$ and $b_i$ coefficients are taken from the previous time-step. Such schemes are often implemented in practice as a simple alternative to fully implicit schemes.
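As an illustration of the mechanics involved, the sketch below assembles the tridiagonal matrix of Eq. (5a) with spdiags and advances the solution by a single SITBS time-step; the grid and time-step values are assumed for the purpose of the example:

```matlab
%One SITBS time-step (Eq. (5a)) on an example grid (values illustrative)
nu=0.1; %m^2/day
dt=0.05; %days
N=101;
x=linspace(-4,6,N)';
dx=x(2)-x(1);
u=1.2*(abs(x)<=0.7); %initial condition from Eq. (1b)
r=nu*dt/dx^2;
a=-r-u*dt/dx; %sub-diagonal coefficients (note u is from the old step)
b=1+2*r+u*dt/dx; %main-diagonal coefficients
c=-r*ones(N,1); %super-diagonal coefficients
%spdiags reads sub-diagonals from the top of a column and super-diagonals
%from the bottom, hence the shifts applied to a and c below
J=spdiags([[a(2:end);0] b [0;c(1:end-1)]],-1:1,N,N);
u=J\u; %solve Eq. (5a) for the new time-step values
```

Because the off-diagonal coefficients are non-positive and the main diagonal dominates whenever $u_i^n\ge 0$, the new values remain bounded between zero and the initial maximum.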

## Classroom Assignment

1) Modify the code developed in ADEStudy.m (developed during the previous session) to add an appropriate subfunction to BurgersStudy.m to solve the problem described by Eqs. (1a) and (1b) using the SITBS scheme, and compare your results to the analytical solution given in Eq. (1c). Try using suitable values of $\Delta x$ (m) and $\Delta t$ (days). Take special care when incorporating the $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ vectors into your Jacobian matrix using spdiags. Check the help documentation for spdiags to be sure.

2) Modify your SITBS scheme further to obtain the ITBS scheme using the Newton iteration method described above. Specify a maximum number of iterations of 100. How many iterations are required for convergence? How much more accurate is ITBS as compared to SITBS?

3) Solve the same problem using ODE15s with backward differencing. Note that specifying the Jacobian pattern (as we did in previous sessions) should lead to a considerable CPU time saving. How does this solution compare to ITBS? What happens if you set ? Try using tic and toc (see help documentation) to determine which of your approximate solutions is most efficient. What happens if you use a central differencing scheme instead?
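For question 3, the fragment below sketches how a tridiagonal Jacobian sparsity pattern can be passed to ODE15s via odeset; the right-hand-side function is a minimal backward-difference discretisation written for this illustration (with $u=0$ assumed beyond both boundaries), so treat it as a starting point rather than a finished answer:

```matlab
%Sketch of using ODE15s with a tridiagonal Jacobian pattern (question 3)
nu=0.1; %m^2/day
N=101;
x=linspace(-4,6,N)';
dx=x(2)-x(1);
uI=1.2*(abs(x)<=0.7); %initial condition from Eq. (1b)
JPat=spdiags(ones(N,3),-1:1,N,N); %tridiagonal sparsity pattern
opts=odeset('JPattern',JPat);
%Backward difference for u*du/dx, central difference for the diffusion term
dudt=@(t,u)nu*([u(2:end);0]-2*u+[0;u(1:end-1)])/dx^2 ...
    -u.*(u-[0;u(1:end-1)])/dx;
[t,u]=ode15s(dudt,[0 0.5 1 2 4 8],uI,opts);
```

Each row of u then contains the solution at one of the requested output times, ready for comparison with the AnalSol subfunction.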

Published with MATLAB® 7.13