Chapter 6, part 4 of 8: more processes
Hello everybody.
This is the fourth in a series of posts that cover chapter 6 of my book (that is, the Monte Carlo framework) and started here. This week, a couple more short examples.
In other news, you can still register for my Introduction to QuantLib Development course; I went on and on about it already, so I’ll leave it at that.
Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the buttons for that are in the footer. Also, make sure to check my Training page.
Example: more processes
In the last post I exposed all the flaws in our implementation of the Black-Scholes process. For an example of a well-behaved process, you can look at listing 6.7 instead, which shows the OrnsteinUhlenbeckProcess class.
Listing 6.7: Implementation of the OrnsteinUhlenbeckProcess class.
The Ornstein-Uhlenbeck process is a simple one, whose feature of interest here is that its mean-reverting drift term \( \theta(\mu - x) \) and its constant diffusion term \( \sigma \) can be integrated exactly. Therefore, besides the mandatory drift and diffusion methods, the class also overrides the expectation and stdDeviation methods so that they implement the formulas for their exact results. The variance method (in terms of which stdDeviation is implemented) has two branches in order to prevent numerical instabilities; for small \( \theta \), the formula for the variance is replaced by its limit for \( \theta \to 0 \).
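In case you don't have the book handy, here is the gist of the class. It's a condensed sketch, not the verbatim listing: I'm assuming the data members are called speed_, level_ and volatility_ (corresponding to \( \theta \), \( \mu \) and \( \sigma \)) and I'm omitting the constructor and other scaffolding. The exact results being implemented are \( E[x_{t+\Delta t}] = \mu + (x_t - \mu) e^{-\theta \Delta t} \) and \( \mathrm{Var}[x_{t+\Delta t}] = \frac{\sigma^2}{2\theta} \left( 1 - e^{-2\theta \Delta t} \right) \), whose limit for \( \theta \to 0 \) is \( \sigma^2 \Delta t \).

    Real OrnsteinUhlenbeckProcess::drift(Time, Real x) const {
        return speed_*(level_ - x);     // theta*(mu - x)
    }
    Real OrnsteinUhlenbeckProcess::diffusion(Time, Real) const {
        return volatility_;             // constant sigma
    }
    Real OrnsteinUhlenbeckProcess::expectation(Time, Real x0,
                                               Time dt) const {
        // exact result for the integrated drift
        return level_ + (x0 - level_)*std::exp(-speed_*dt);
    }
    Real OrnsteinUhlenbeckProcess::stdDeviation(Time t, Real x0,
                                                Time dt) const {
        return std::sqrt(variance(t, x0, dt));
    }
    Real OrnsteinUhlenbeckProcess::variance(Time, Real, Time dt) const {
        if (speed_ < std::sqrt(QL_EPSILON)) {
            // for very small theta, fall back to the limit sigma^2*dt
            // of the formula below to avoid numerical instabilities
            return volatility_*volatility_*dt;
        } else {
            return 0.5*volatility_*volatility_/speed_*
                   (1.0 - std::exp(-2.0*speed_*dt));
        }
    }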
Finally, for an example of a multi-dimensional process, we'll have a look at the StochasticProcessArray class, sketched in listing 6.8.
Listing 6.8: Partial implementation of the StochasticProcessArray class.
This class doesn't model a specific process; rather, it composes N correlated one-dimensional processes into a single entity. I'll use it here to show how correlation information can be included in a process.
Its constructor takes the vector of 1-D processes for the underlyings and their correlation matrix. The processes are stored in the corresponding data member, whereas the correlation is not: instead, the process precomputes and stores its square root. (That would be its matrix square root; that is, \( \sqrt{A} = B \) such that \( B B^T = A \).) The constructor also registers with each process, in order to forward any notifications they might send.
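To give an idea of the code involved, here is a sketch of the constructor; I'm assuming the square root is stored in a sqrtCorrelation_ data member and computed by the library's pseudoSqrt function, and that registerWith comes from the usual Observer machinery.

    StochasticProcessArray::StochasticProcessArray(
        const std::vector<boost::shared_ptr<StochasticProcess1D> >& processes,
        const Matrix& correlation)
    : processes_(processes),
      sqrtCorrelation_(pseudoSqrt(correlation,
                                  SalvagingAlgorithm::Spectral)) {
        QL_REQUIRE(!processes.empty(), "no processes given");
        // forward any notifications coming from the underlyings
        for (Size i=0; i<processes_.size(); ++i)
            registerWith(processes_[i]);
    }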
Most other methods, such as initialValues, drift, or expectation, simply loop over the stored processes, calling their corresponding methods and collecting the results in an array. The diffusion method also loops over the processes, but combines the results with the correlation: it multiplies each row of its square root by the diffusion term of the corresponding process, and returns the result (if you multiply it by its transpose, you'll find the familiar terms \( \sigma_i \rho_{ij} \sigma_j \) of the covariance matrix). The stdDeviation method does the same, but using the standard deviations of the underlying processes, which also include the passed \( \Delta t \).
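In code, the two flavors of looping might look like the following sketch, with expectation as the plain case and diffusion as the correlated one (the lambda just scales each row of the matrix in place; the actual code may differ cosmetically):

    Array StochasticProcessArray::expectation(Time t0, const Array& x0,
                                              Time dt) const {
        // collect the one-dimensional expectations in an array
        Array tmp(size());
        for (Size i=0; i<size(); ++i)
            tmp[i] = processes_[i]->expectation(t0, x0[i], dt);
        return tmp;
    }

    Matrix StochasticProcessArray::diffusion(Time t,
                                             const Array& x) const {
        // scale each row of the square root of the correlation
        // by the diffusion term of the corresponding process
        Matrix tmp = sqrtCorrelation_;
        for (Size i=0; i<size(); ++i) {
            Real sigma_i = processes_[i]->diffusion(t, x[i]);
            std::transform(tmp.row_begin(i), tmp.row_end(i),
                           tmp.row_begin(i),
                           [=](Real v) { return v*sigma_i; });
        }
        return tmp;
    }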
This leaves us with the evolve method. If we knew that all the processes behaved reasonably (i.e., by adding the calculated variations to the values of their variables) we might just inherit the default implementation, which takes the results of the expectation and stdDeviation methods, multiplies the latter by the array of random variates, and adds the two terms. However, we don't have such a guarantee, and the method needs to be implemented in a different way. First, it deals with the correlation by multiplying the Gaussian variates by its square root, thus obtaining an array of correlated random variates. Then, it calls the evolve method of each one-dimensional process with the respective arguments and collects the results.
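The resulting code might read as follows (a sketch, again assuming the sqrtCorrelation_ member from before):

    Array StochasticProcessArray::evolve(Time t0, const Array& x0,
                                         Time dt, const Array& dw) const {
        // correlate the Gaussian variates...
        Array dz = sqrtCorrelation_ * dw;
        // ...then delegate the actual evolution to each process
        Array tmp(size());
        for (Size i=0; i<size(); ++i)
            tmp[i] = processes_[i]->evolve(t0, x0[i], dt, dz[i]);
        return tmp;
    }

This way, whatever policy each one-dimensional process implements in its own evolve method (such as evolving the logarithm of its variable instead of the variable itself) is preserved.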