# Chapter 8, part 2 of n: evolution schemes

Welcome back.

This week, the second part of the series on the finite-differences framework that started in the last post. And also this week, I’ll be releasing QuantLib 1.6. Stay tuned.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the buttons for that are in the footer. Also, make sure to check my Training page.

### Evolution schemes

In a partial differential equation \( {\partial f}/{\partial t} = L f \), the operators in the previous subsection provide a discretization of the derivatives on the right-hand side. Evolution schemes discretize the time derivative on the left-hand side instead.

As you know, finite-difference methods in finance start from a known state \( \mathbf{f}(T) \) at maturity (the payoff of the derivative) and evolve backwards to today’s date. At each step, we need to evaluate \( \mathbf{f}(t) \) based on \( \mathbf{f}(t+\Delta t) \).

The simplest way is to approximate the equation above as

\[\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} = L \cdot \mathbf{f}(t+\Delta t)\]which simplifies to \( \mathbf{f}(t) = \mathbf{f}(t+\Delta t) - \Delta t \cdot L \cdot \mathbf{f}(t+\Delta t) \), or \( \mathbf{f}(t) = (I - \Delta t \cdot L) \cdot \mathbf{f}(t+\Delta t) \). It's a simple matrix multiplication, and updating the function array translates into a single call to the operator's `applyTo` method (in actual code, of course, the `I - dt*L` operator would be precalculated and stored across time steps).
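To fix ideas, here is a self-contained sketch of such an explicit step. This is toy code with made-up names, not the library's implementation; it assumes the operator is tridiagonal and stored as its three diagonals.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// One explicit Euler step: f(t) = (I - dt*L) * f(t+dt), with the
// tridiagonal operator L stored as its three diagonals.
// lower/upper have size n-1, diag and f have size n.
std::vector<double> explicitEulerStep(const std::vector<double>& lower,
                                      const std::vector<double>& diag,
                                      const std::vector<double>& upper,
                                      double dt,
                                      const std::vector<double>& f) {
    std::size_t n = f.size();
    std::vector<double> result(n);
    for (std::size_t i = 0; i < n; ++i) {
        double Lf = diag[i] * f[i];                 // row i of L*f
        if (i > 0)     Lf += lower[i-1] * f[i-1];
        if (i < n - 1) Lf += upper[i] * f[i+1];
        result[i] = f[i] - dt * Lf;                 // (I - dt*L)*f
    }
    return result;
}
```

Note that the multiplication is O(N) here because the operator is banded; for a generic matrix it would cost O(N²).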

A slightly less simple way is to write

\[\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} = L \cdot \mathbf{f}(t)\]which results in \( (I + \Delta t \cdot L) \cdot \mathbf{f}(t) = \mathbf{f}(t + \Delta t) \). This is a more complex step, since it requires solving a linear system (or inverting the operator, which is the same thing); but given the interface of our operators, the update translates equally simply into a call to the operator's `solveFor` method.

Now, this is where using tridiagonal operators pays off. Let me
oversimplify, so I can keep a long story short: schemes of the first
kind (called *explicit* schemes) are simpler but less
stable. *Implicit* schemes (those of the second kind), instead, are
more stable and can be used with larger time steps. If the operator
were a generic one, the price for an implicit step would be \( O(N^3)
\) in the size of the grid, making it impractical when compared to
the \( O(N^2) \) cost of an explicit step. For tridiagonal
operators, though, both methods cost \( O(N) \), which allows us to
reap the benefits of implicit steps at no additional price.
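To make the cost argument concrete, here is a sketch of an implicit Euler step based on the Thomas algorithm, the standard forward-elimination/back-substitution solver for tridiagonal systems. Again, this is toy code with made-up names, not the library's implementation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Solve A*x = d for tridiagonal A in O(N) (Thomas algorithm).
// lower/upper have size n-1, diag and d have size n; n >= 2 is assumed.
std::vector<double> thomasSolve(const std::vector<double>& lower,
                                const std::vector<double>& diag,
                                const std::vector<double>& upper,
                                const std::vector<double>& d) {
    std::size_t n = d.size();
    std::vector<double> cp(n - 1), dp(n), x(n);
    // forward elimination
    cp[0] = upper[0] / diag[0];
    dp[0] = d[0] / diag[0];
    for (std::size_t i = 1; i < n; ++i) {
        double m = diag[i] - lower[i-1] * cp[i-1];
        if (i < n - 1) cp[i] = upper[i] / m;
        dp[i] = (d[i] - lower[i-1] * dp[i-1]) / m;
    }
    // back substitution
    x[n-1] = dp[n-1];
    for (std::size_t i = n - 1; i-- > 0; )
        x[i] = dp[i] - cp[i] * x[i+1];
    return x;
}

// One implicit Euler step: solve (I + dt*L) * f(t) = f(t+dt).
std::vector<double> implicitEulerStep(std::vector<double> lower,
                                      std::vector<double> diag,
                                      std::vector<double> upper,
                                      double dt,
                                      const std::vector<double>& f) {
    for (double& v : lower) v *= dt;          // build I + dt*L
    for (double& v : upper) v *= dt;
    for (double& v : diag)  v = 1.0 + dt * v;
    return thomasSolve(lower, diag, upper, f);
}
```

The solve is one linear sweep forward and one backward, so an implicit step ends up as cheap as an explicit one.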

Going back to the library: the two schemes above (called the *explicit
Euler* and *implicit Euler* scheme, respectively) are both
available. They are two particular specializations of a generic
`MixedScheme` class template, shown in the listing that follows, which
models the discretization scheme

\[\frac{\mathbf{f}(t+\Delta t) - \mathbf{f}(t)}{\Delta t} = L \cdot \left[ (1-\theta) \cdot \mathbf{f}(t+\Delta t) + \theta \cdot \mathbf{f}(t) \right]\]

where an implicit and an explicit step are mixed. As you can see from the formula, \( \theta = 0 \) gives the explicit Euler scheme and \( \theta = 1 \) gives implicit Euler; any value in between can be used (for instance, \( \theta = 1/2 \) gives the Crank-Nicolson scheme).
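Spelling this out, collecting the \( \mathbf{f}(t) \) terms of the mixed scheme on the left-hand side yields

\[ \left( I + \theta \cdot \Delta t \cdot L \right) \cdot \mathbf{f}(t) = \left( I - (1-\theta) \cdot \Delta t \cdot L \right) \cdot \mathbf{f}(t+\Delta t) \]

For \( \theta = 0 \) the left-hand operator reduces to the identity and we recover the explicit update; for \( \theta = 1 \) the right-hand one does, and we recover the implicit system; for \( \theta = 1/2 \), as in Crank-Nicolson, both half-weighted operators appear.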

As I mentioned before, schemes take the type of their differential
operators as a template argument. The `MixedScheme` class does that as
well, and extracts a number of types from the operator class by means
of the `OperatorTraits` template (which I'll describe later on).

The constructor of the `MixedScheme` class takes the differential
operator \( L \), the value of \( \theta \), and a set of boundary
conditions (more on those later) and stores them. Another method,
`setStep`, stores the \( \Delta t \) to be used as the time step and
precomputes the operators \( I - (1-\theta) \cdot \Delta t \cdot L
\) and \( I + \theta \cdot \Delta t \cdot L \) on which the
`applyTo` and `solveFor` methods will be called; the two special cases
\( \theta = 0 \) and \( \theta = 1 \) are checked in order to
avoid unnecessary computations. This is done in a separate method so
that the time step can be changed in the middle of a simulation;
by being able to reset the time step, we can reuse the same scheme
instance.

Finally, the `step` method performs the actual evolution of the
function array. The implementation shown in the listing is just a
basic sketch that I'll be improving in the next subsections; here, it
just performs the explicit and implicit parts of the scheme, in this
order (again, avoiding unnecessary operations).
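To fix ideas, here is a greatly simplified, self-contained sketch of how such a scheme template might hang together. The names (`ToyMixedScheme`, `ScalarOp`, `identityPlus`) are mine, not the library's; boundary conditions and real operator algebra are omitted, and the "operator" is just a scalar multiple of the identity, which is enough to exercise the `applyTo`/`solveFor` interface.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy operator a*I with the applyTo/solveFor interface described in
// the text. Illustrative only; not the library's tridiagonal operator.
struct ScalarOp {
    double a;
    explicit ScalarOp(double a_ = 0.0) : a(a_) {}
    std::vector<double> applyTo(const std::vector<double>& f) const {
        std::vector<double> r(f);
        for (double& x : r) x *= a;   // multiply by a*I
        return r;
    }
    std::vector<double> solveFor(const std::vector<double>& d) const {
        std::vector<double> r(d);
        for (double& x : r) x /= a;   // solve (a*I)*x = d
        return r;
    }
};

// I + k*L for the toy operator; a stand-in for operator algebra.
ScalarOp identityPlus(double k, const ScalarOp& L) {
    return ScalarOp(1.0 + k * L.a);
}

// Sketch of a mixed scheme: setStep precomputes the explicit part
// I - (1-theta)*dt*L and the implicit part I + theta*dt*L; step
// applies them in that order, skipping whichever one is trivial.
template <class Operator>
class ToyMixedScheme {
  public:
    ToyMixedScheme(const Operator& L, double theta)
    : L_(L), theta_(theta), dt_(0.0) {}
    void setStep(double dt) {
        dt_ = dt;
        if (theta_ != 1.0)  // explicit part not needed for implicit Euler
            explicitPart_ = identityPlus(-(1.0 - theta_) * dt_, L_);
        if (theta_ != 0.0)  // implicit part not needed for explicit Euler
            implicitPart_ = identityPlus(theta_ * dt_, L_);
    }
    void step(std::vector<double>& f) const {
        if (theta_ != 1.0) f = explicitPart_.applyTo(f);
        if (theta_ != 0.0) f = implicitPart_.solveFor(f);
    }
  private:
    Operator L_;
    double theta_, dt_;
    Operator explicitPart_, implicitPart_;
};
```

In the library the same structure appears with a tridiagonal operator, so `applyTo` is a banded multiplication and `solveFor` a tridiagonal solve.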

The few schemes provided by the library (such as the `ExplicitEuler`
scheme, shown in the listing) are all specializations of the `MixedScheme`
class (I'm just talking about the old framework here; the new
framework has different ones). Again, there's no base class for an
evolution scheme; other parts of the framework will take them as a
template argument and expect them to provide the `setStep` and `step`
methods.
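As a sketch of this pattern (again with made-up names, not the library's code), such a specialization can simply derive from the generic template and fix \( \theta \) in its constructor:

```cpp
#include <cassert>

// Dummy operator type, just to instantiate the templates.
struct DummyOperator {};

// Minimal scheme-like template; stores an operator and theta.
// Illustrative only; the real class also has setStep/step.
template <class Operator>
class ToyScheme {
  public:
    ToyScheme(const Operator& L, double theta) : L_(L), theta_(theta) {}
    double theta() const { return theta_; }
  private:
    Operator L_;
    double theta_;
};

// Explicit-Euler-style specialization: fixes theta = 0. Note that no
// common base class is involved; client code takes the scheme type as
// a template argument and relies on its interface being there.
template <class Operator>
class ToyExplicitEuler : public ToyScheme<Operator> {
  public:
    explicit ToyExplicitEuler(const Operator& L)
    : ToyScheme<Operator>(L, 0.0) {}
};
```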