# Chapter 6, part 6 of 8: Monte Carlo models

Welcome back.

This is the sixth in a series of posts covering chapter 6 of my book (here is the first one). This week: Monte Carlo models.

And of course, you can still register for my *Introduction to QuantLib Development* course: click on the link for more information.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the buttons for that are in the footer. Also, make sure to check my Training page.

### Putting it all together

Having read all about the different pieces of a model in the previous posts, you’d expect me to start assembling them and finally get a working Monte Carlo model. However, before doing so, we still have the task of picking a specific set of pieces among those available in our toolbox. This will be the subject of the next subsection—after which we’ll build the model, developer’s honor.

#### Monte Carlo traits

As I mentioned before, there’s a number of choices we have to make in order to implement a Monte Carlo model. Depending on the dimension of the problem and the stochastic process we choose, we can use one- or multi-dimensional paths and generators; or we can use pseudo-random numbers or low-discrepancy sequences (and even though I haven’t listed them, there’s quite a few algorithms available for either type).

For at least one of those choices, dynamic polymorphism is not an option: one-dimensional and multi-dimensional generators can’t have a common interface, since their methods have different input and return types. Therefore, we’ll go with static polymorphism and supply the required classes to the model as template arguments.

The problem is, this can very quickly result in unwieldy declarations; the sight of a fully spelled-out instantiation can bring shudders to the most seasoned developer. Sure, `typedef`s can help; by using them, a user might assign mnemonics to default choices and shorten the declaration. Default template arguments might alleviate the pain, too. However, we went for a mechanism that also allowed us, on the one hand, to define mnemonics for commonly used groups of related choices; and on the other hand, to add helper methods to the mix. To do this, we decided to use traits [1].

Listing 6.12: Example of random-number generation traits.

Listing 6.12 shows the default traits classes for pseudo-random and low-discrepancy number generation. First comes the `GenericPseudoRandom` class template. It takes as template arguments the type of a uniform pseudo-random number generator and that of an inverse-cumulative function object, and builds a number of other types upon them. The type of the passed generator itself is defined as `urng_type`—emphasis on “`u`” for uniform and “`n`” for number. Based on this type, it defines `rng_type`, no longer uniform since it uses the inverse-cumulative function to return numbers according to its distribution; `ursg_type`, where the “`n`” is replaced by “`s`” for sequence; and finally `rsg_type`, which generates sequences of numbers according to the passed distribution. The compile-time constant `allowErrorEstimate`, written as an enumeration to satisfy older compilers (it should really be a `static const bool`), tells us that this generator allows us to estimate the Monte Carlo error as a function of the number of samples; and the helper function `make_sequence_generator` makes it easier to create a generator based on the passed inputs.

Then, we instantiate the class template with our weapons of choice. For the basic generator, that would be the `MersenneTwisterUniformRng` class; for the function object, the `InverseCumulativeNormal` class, since we’ll most often want normally distributed numbers. The resulting traits class will be our default for pseudo-random generation; fantasy not being our strong suit, we defined it as the `PseudoRandom` class.

The `GenericLowDiscrepancy` class template is defined in a similar way, but with two differences. Since low-discrepancy numbers are generated in sequences, the types for single-number generation are missing; and the enumeration tells us that we can’t forecast the statistical error we’ll get. We define the `LowDiscrepancy` traits class as the one obtained by selecting the `SobolRsg` class as our generator and, again, the `InverseCumulativeNormal` class as our function object.

Finally, we defined a couple of traits classes to hold types related to specific Monte Carlo functionality, such as the types of the paths, path generators, and path pricers used. They are shown in listing 6.13: the `SingleVariate` class holds the types we need for 1-D models, while the `MultiVariate` class holds the types for multi-dimensional ones. They are both class templates, and take as their template argument a traits class for random-number generation.

Listing 6.13: Example of Monte Carlo traits.

By combining the provided RNG and Monte Carlo traits (or any traits classes that one might want to define, if one wants to use a particular type), not only can we provide a model with all the necessary information, but we can do it with a simpler and more mnemonic syntax; the idea is to move some complexity from users to developers. We have to use some template tricks to get this to work, but when it does, the result is a bit more readable (and writable) for users. But that’s for the next section, in which we finally assemble a Monte Carlo model.

#### The Monte Carlo model

Listing 6.14 shows the `MonteCarloModel` class, which is the low-level workhorse of Monte Carlo simulations.

Listing 6.14: Implementation of the `MonteCarloModel` class template.

It brings together path generation, pricing, and statistics, and as such takes template arguments defining the types involved: an `MC` traits class defining types related to the simulation, an `RNG` traits class describing random-number generation, and a statistics class `S`, defaulting to the `Statistics` class. (Don’t worry, I’m not going off on another tangent, even though an early outline of this chapter had a section devoted to statistics. If you’re interested, the `Statistics` class is in appendix A of the book.) The `MC` class is a template template argument, so that it can be fed the `RNG` traits (as shown in the previous section; see for instance the `MultiVariate` class).

The class defines aliases for a few frequently used types; most of them are extracted from the traits by instantiating the `MC` class template with the `RNG` class. The resulting class provides the types of the path generator and the path pricer to be used; from those, in turn, the type of the sample paths and that of the returned prices can be obtained.

The constructor takes the pieces that will be made to work together; at least a path generator (well, a pointer to one, but you’ll forgive me for not spelling out all of them), a path pricer, and an instance of the statistics class, as well as a boolean flag specifying whether to use antithetic variates. Then, there are a few optional arguments related to control variates: another path pricer, the analytic value of the control variate, and possibly another path generator. Optional arguments might not be the best choice, since they make it possible to pass a control variate path pricer and not the corresponding analytic value; it would have been safer to have a constructor with no control variate arguments, and another constructor with both path pricer and analytic value being mandatory and with an optional path generator. However, the current version saves a few lines of code. The constructor copies the passed arguments into the corresponding data members, and sets another boolean flag based on the presence or the lack of the control-variate arguments.

The main logic is implemented in the `addSamples` method. It’s just a loop: draw a path, price it, add the result to the statistics; but it includes a bit of complication in order to take care of variance reduction. It takes the number of samples to add; for each of them, it asks the path generator for a path, passes the path to the pricer, and gets back a price. In the simplest case, that’s all there is to it; the price can just be added to the statistics (together with the corresponding weight, also returned from the generator) and the loop can start the next iteration. If the user passed control-variate data, instead, things get more interesting. If no second path generator was specified, we pass to the alternate pricer the same path we used for the main one; otherwise, we ask the second generator for a path and use that one. In both cases, we adjust the baseline price by subtracting the simulated price of the control and adding its analytic value.

It’s not over yet. If the user also asked for antithetic variates, we repeat the same dance (this time asking the generator, or the generators, for the paths antithetic to the ones we just used) and we add to the statistics the average of the regular and antithetic prices; if not, we just add the price we obtained on the original paths. Lather, rinse, and repeat until the required number of samples is reached.

Finally, the full results (mean price and whatnot) can be obtained by calling the `sampleAccumulator` method, which returns a reference to the stored statistics. “Accumulator” is STL lingo; we should probably have used a method name taken from the financial domain instead. Such as, I don’t know, “statistics.” Oh well.

Next time: Monte Carlo simulations.

#### Bibliography

[1] N.C. Myers, *Traits: a new and useful template technique*. In *The
C++ Report*, June 1995.