
Monday, March 23, 2015

Chapter 7, part 2 of 6: examples of discretized assets

Hello everybody.

This week, the second part of a series on the QuantLib tree framework that started in the previous post. As usual, it's taken from my book.

In case you missed it: we set the dates for my next Introduction to QuantLib Development course, which will be in London from June 29th to July 1st. More info at this link. You can get an early-bird discount until April 30th.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Example: discretized bonds

Vanilla bonds (zero-coupon, fixed-rate and floating-rate) are simple enough to work as first examples, but at the same time provide a range of features large enough for me to make a few points.

Well, maybe not the zero-coupon bond. It's probably the simplest possible asset (bar one with no payoff), so it's not that interesting on its own; however, it's going to be useful as a helper class in the examples that follow, since it provides a way to estimate discount factors on the lattice.

Its implementation is shown in the following listing.
    class DiscretizedDiscountBond : public DiscretizedAsset {
      public:
        DiscretizedDiscountBond() {}
        void reset(Size size) {
            values_ = Array(size, 1.0);
        }
        std::vector<Time> mandatoryTimes() const {
            return std::vector<Time>();
        }
    };
It defines no mandatory times (because it will be happy to be initialized at any time you choose), it performs no adjustments (because nothing ever happens during its life), and its reset method simply fills each value in the array with one unit of currency. Thus, if you initialize an instance of this class at a time \( T \) and roll it back on a lattice until an earlier time \( t \), the array values will equal the discount factors between \( t \) and \( T \) as seen from the corresponding lattice nodes.
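For instance, here's a minimal usage sketch; the lattice instance and the times \( T \) and \( t \) are assumed to be available already.
    DiscretizedDiscountBond bond;
    bond.initialize(lattice, T);  // all node values start at 1 at time T
    bond.rollback(t);             // roll back on the lattice to time t
    // bond.values()[j] now equals the discount factor between t and T
    // as seen from the j-th node at time t.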


Things start to get more interesting when we turn to fixed-rate bonds. QuantLib doesn't provide discretized fixed-rate bonds at this time; the listing below shows a simple implementation which works as an example, but should still be improved to enter the library.
    class DiscretizedFixedRateBond : public DiscretizedAsset {
      public:
        DiscretizedFixedRateBond(const vector<Time>& paymentTimes,
                                 const vector<Real>& coupons,
                                 Real redemption)
        : paymentTimes_(paymentTimes), coupons_(coupons),
          redemption_(redemption) {}

        vector<Time> mandatoryTimes() const {
            return paymentTimes_;
        }
        void reset(Size size) {
            values_ = Array(size, redemption_);
            adjustValues();
        }
      private:
        void postAdjustValuesImpl() {
            for (Size i=0; i<paymentTimes_.size(); i++) {
                Time t = paymentTimes_[i];
                if (t >= 0.0 && isOnTime(t)) {
                    addCoupon(i);
                }
            }
        }
        void addCoupon(Size i) {
            values_ += coupons_[i];
        }

        vector<Time> paymentTimes_;
        vector<Real> coupons_;
        Real redemption_;
    };

The constructor takes and stores a vector of times holding the payment schedule, the vector of the coupon amounts, and the amount of the redemption. Note that the type of the arguments is different from what the corresponding instrument class is likely to store (say, a vector of CashFlow instances); this implies that the pricing engine will have to perform some conversion work before instantiating the discretized asset.
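As a sketch of that conversion work, an engine holding the bond's cash flows as a Leg might do something like the following; the day counter and reference date used to turn dates into times are assumptions of the example, not part of the listing above.
    std::vector<Time> paymentTimes;
    std::vector<Real> couponAmounts;
    for (Size i=0; i<leg.size(); ++i) {
        // convert each payment date into a time on the lattice...
        paymentTimes.push_back(
            dayCounter.yearFraction(referenceDate, leg[i]->date()));
        // ...and store the corresponding amount.
        couponAmounts.push_back(leg[i]->amount());
    }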

The presence of a payment schedule implies that, unlike the zero-coupon bond above, this bond cannot be instantiated at just any time; and in fact, this class implements the mandatoryTimes method by returning the vector of the payment times. Rollback on the lattice will have to stop at each such time, and initialization will have to be performed at the maturity time of the bond, i.e., the latest of the returned times. When one does so, the reset method will fill each value in the array with the redemption amount and then call the adjustValues method, which will take care of the final coupon.

In order to enable adjustValues to do so, this class overrides the virtual postAdjustValuesImpl method. (Why the post- version of the method and not the pre-, you say? Bear with me a bit longer: all will become clear.) The method loops over the payment times and checks, by means of the probably poorly named isOnTime method, whether any of them equals the current asset time. If this is the case, we add the corresponding coupon amount to the asset values. For readability, the actual work is factored out in the addCoupon method. The coupon amounts will be automatically discounted as the asset is rolled back on the lattice.
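Putting the pieces together, client code (or an engine) might use the class along the lines of the following sketch, assuming the payment times are sorted and that a suitable lattice was already built on a grid including the bond's mandatory times.
    DiscretizedFixedRateBond bond(paymentTimes, coupons, redemption);
    Time maturity = bond.mandatoryTimes().back();  // latest payment time
    bond.initialize(lattice, maturity);  // reset() also adds the final coupon
    bond.rollback(0.0);                  // earlier coupons are added and discounted
    Real npv = bond.presentValue();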


Finally, let's turn to the floating-rate bond, shown in the listing that follows; this, too, is a simplified implementation.
    class DiscretizedFloatingRateBond : public DiscretizedAsset {
      public:
        DiscretizedFloatingRateBond(const vector<Time>& paymentTimes,
                                    const vector<Time>& fixingTimes,
                                    Real notional)
        : paymentTimes_(paymentTimes), fixingTimes_(fixingTimes),
          notional_(notional) {}

        vector<Time> mandatoryTimes() const {
            vector<Time> times = paymentTimes_;
            std::copy(fixingTimes_.begin(), fixingTimes_.end(),
                      back_inserter(times));
            return times;
        }
        void reset(Size size) {
            values_ = Array(size, notional_);
            adjustValues();
        }
      private:
        void preAdjustValuesImpl() {
            for (Size i=0; i<fixingTimes_.size(); i++) {
                Time t = fixingTimes_[i];
                if (t >= 0.0 && isOnTime(t)) {
                    addCoupon(i);
                }
            }
        }
        void addCoupon(Size i) {
            DiscretizedDiscountBond bond;
            bond.initialize(method(), paymentTimes_[i]);
            bond.rollback(time_);

            for (Size j=0; j<values_.size(); j++) {
                 values_[j] += notional_ * (1.0 - bond.values()[j]);
            }
        }

        vector<Time> paymentTimes_;
        vector<Time> fixingTimes_;
        Real notional_;
    };
The constructor takes and stores the vector of payment times, the vector of fixing times, and the notional of the bond; there are no coupon amounts, since they will be estimated during the calculation. For simplicity of implementation, we'll assume that the accrual time for the i-th coupon equals the time between its fixing time and its payment time.

The mandatoryTimes method returns the union of payment times and fixing times, since we'll need to work on both during the calculations. The reset method is similar to the one for fixed-rate bonds, and fills the array with the redemption value (which equals the notional of the bond) before calling adjustValues.

Adjustment is performed in the overridden preAdjustValuesImpl method. (Yes, the pre- version. Patience.) It loops over the fixing times, checks whether any of them equals the current time, and if so calls the addCoupon method.

Now, like the late Etta James in one of her hits, you'd be justified in shouting "Stop the wedding". Of course the coupon should be added at the payment date, right? Well, yes; but the problem is that we're going backwards in time. In general, at the payment date we don't have enough information to add the coupon; it can only be estimated based on the value of the rate at an earlier time that we haven't yet reached. Therefore, we have to keep rolling back on the lattice until we get to the fixing date, at which point we can calculate the coupon amount and add it to the bond. In this case, we'll have to take care ourselves of discounting from the payment date, since we passed that point already.

That's exactly what the addCoupon method does. First of all, it instantiates a discount bond at the payment time \( T \) and rolls it back to the current time, i.e., the fixing date \( t \), so that its value at the \( j \)-th node equals the discount factor \( D_j \) between \( t \) and \( T \). From those, we could estimate the floating rates \( r_j \) (since it must hold that \( 1 + r_j(T-t) = 1/D_j \)) and then the coupon amounts; but with a bit of algebra, we can find a simpler calculation. The coupon amount \( C_j \) is given by \( Nr_j(T-t) \), with \( N \) being the notional; and since the relation above tells us that \( r_j(T-t) = 1/D_j - 1 \), we can substitute it to find that \( C_j = N(1/D_j - 1) \). Now, remember that we already rolled back to the fixing date, so if we add the amount here we also have to discount it ourselves, since it won't be rolled back from the payment date. This means that we have to multiply it by \( D_j \), and thus the amount to be added to the \( j \)-th value in the array is simply \( C_j D_j = N(1/D_j - 1)D_j = N(1-D_j) \). The final expression is the one that appears in the implementation of addCoupon.

As you probably noted, the above hinges on the assumption that the accrual time equals the time between payment and fixing time. If this were not the case, the calculation would no longer simplify and we'd have to change the implementation; for instance, we might instantiate a first discount bond at the accrual end date to estimate the floating rate and the coupon amount, and a second one at the payment date to calculate the discount factors to be used when adding the coupon amount to the bond value. Of course, the increased accuracy would cause the performance to degrade since addCoupon would roll back two bonds, instead of one. You can choose either implementation based on your requirements.
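As a sketch of the latter approach, addCoupon might be rewritten as follows; the accrualEndTimes_ and accrualTimes_ data members are hypothetical additions, not part of the simplified listing above.
    void addCoupon(Size i) {
        // discount factors from the current (fixing) time to the
        // accrual end date, used to estimate the floating rate...
        DiscretizedDiscountBond accrualBond;
        accrualBond.initialize(method(), accrualEndTimes_[i]);
        accrualBond.rollback(time_);
        // ...and to the payment date, used to discount the coupon.
        DiscretizedDiscountBond paymentBond;
        paymentBond.initialize(method(), paymentTimes_[i]);
        paymentBond.rollback(time_);

        Time fixingPeriod = accrualEndTimes_[i] - time_;
        for (Size j=0; j<values_.size(); j++) {
            Real rate =
                (1.0/accrualBond.values()[j] - 1.0)/fixingPeriod;
            Real coupon = notional_ * rate * accrualTimes_[i];
            values_[j] += coupon * paymentBond.values()[j];
        }
    }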

Example: discretized option

Sorry to have kept you waiting, folks. Here is where I finally explain the pre- vs post-adjustment choice, after the previous example helped me put my ducks in a row. I'll do so by showing an example of an asset class (the DiscretizedOption class, shown in the listing below) that can be used to wrap an underlying asset and obtain an option to enter the same: for instance, it could take a swap and yield a swaption. The implementation shown here is a slightly simplified version of the one provided by QuantLib, since it assumes a Bermudan exercise (or European, if one passes a single exercise time). Like the implementation in the library, it also assumes that there's no premium to pay in order to enter the underlying deal.
    class DiscretizedOption : public DiscretizedAsset {
      public:
        DiscretizedOption(
                  const shared_ptr<DiscretizedAsset>& underlying,
                  const vector<Time>& exerciseTimes)
        : underlying_(underlying), exerciseTimes_(exerciseTimes) {}

        vector<Time> mandatoryTimes() const {
            vector<Time> times = underlying_->mandatoryTimes();
            for (Size i=0; i<exerciseTimes_.size(); ++i)
                if (exerciseTimes_[i] >= 0.0)
                    times.push_back(exerciseTimes_[i]);
            return times;
        }
        void reset(Size size) {
            QL_REQUIRE(method() == underlying_->method(),
                       "option and underlying were initialized on "
                       "different lattices");
            values_ = Array(size, 0.0);
            adjustValues();
        }
      private:
        void postAdjustValuesImpl() {
            underlying_->partialRollback(time());
            underlying_->preAdjustValues();
            for (Size i=0; i<exerciseTimes_.size(); i++) {
                Time t = exerciseTimes_[i];
                if (t >= 0.0 && isOnTime(t))
                    applyExerciseCondition();
            }
            underlying_->postAdjustValues();
        }
        void applyExerciseCondition() {
            for (Size i=0; i<values_.size(); i++)
                values_[i] = std::max(underlying_->values()[i],
                                      values_[i]);
        }

        shared_ptr<DiscretizedAsset> underlying_;
        vector<Time> exerciseTimes_;
    };
Onwards. The constructor, as usual, takes and stores the underlying asset and the exercise times; nothing much to write about.

The mandatoryTimes method takes the vector of times required by the underlying and adds the option's exercise times. Of course, this is done so that both the underlying and the option can be priced on the same lattice; the sequence of operations to get the option price will be something like:
    underlying = shared_ptr<DiscretizedAsset>(...);
    option = shared_ptr<DiscretizedAsset>(
                new DiscretizedOption(underlying, exerciseTimes));
    grid = TimeGrid(..., option->mandatoryTimes());
    lattice = shared_ptr<Lattice>(new SomeLattice(..., grid, ...));
    underlying->initialize(lattice, T1);
    option->initialize(lattice, T2);
    option->rollback(t0);
    NPV = option->presentValue();
in which, first, we instantiate both underlying and option and retrieve the mandatory times from the latter; then, we create the lattice and initialize both assets (usually at different times, e.g., the maturity date for a swap and the latest exercise date for the corresponding swaption); and finally, we roll back the option and get its price. As we'll see in a minute, the option also takes care of rolling the underlying back as needed.

Back to the class implementation. The reset method performs the sanity check that underlying and option were initialized on the same lattice, fills the values with zeroes (what you end up with if you don't exercise), and then calls adjustValues to take care of a possible exercise.

Which brings us to the crux of the matter, i.e., the postAdjustValuesImpl method. The idea is simple enough: if we're on an exercise time, we check whether keeping the option is worth more than entering the underlying asset. To do so, we roll the underlying asset back to the current time, compare values at each node, and set the option value to the maximum of the two; this latest part is abstracted out in the applyExerciseCondition method.

The tricky part of the problem is that the underlying might need to perform an adjustment of its own when rolled back to the current time. Should this be done before or after the option looks at the underlying values?

It depends on the particular adjustment. Let's look at the bonds in the previous example. If the underlying is a discretized fixed-rate bond, and if the current time is one of its payment times, it needs to adjust its values by adding a coupon. This coupon, though, is being paid now and thus is no longer part of the asset if we exercise and enter it. Therefore, the decision to exercise must be based on the bond value without the coupon; i.e., we must call the applyExerciseCondition method before adjusting the underlying.

The discretized floating-rate bond is another story. It adjusts the values if the current time is one of its fixing times; but in this case the corresponding coupon is just starting and will be paid at the end of the period, and so must be added to the bond value before we decide about exercise. Thus, the conclusion is the opposite: we must call applyExerciseCondition after adjusting the underlying.

What should the option do? It can't distinguish between the two cases, since it doesn't know the specific behavior of the asset it was passed; therefore, it lets the underlying itself sort it out. First, it rolls the underlying back to the current time, but without performing the final adjustment (that's what the partialRollback method does); instead, it calls the underlying's preAdjustValues method. Then, if we're on an exercise time, it performs its own adjustment; and finally, it calls the underlying's postAdjustValues method.

This is the reason the DiscretizedAsset class has both a preAdjustValues and a postAdjustValues method; they're there so that, in case of asset composition, the underlying can choose on which side of the fence to be when some other adjustment (such as an exercise) is performed at the same time. In the case of our previous example, the fixed-rate bond will add its coupon in postAdjustValues and have it excluded from the future bond value, while the floating-rate bond will add its coupon in preAdjustValues and have it included.

Unfortunately, this solution is not very robust. For instance, if the exercise dates were a week or two before the coupon dates (as is often the case), the option would break for fixed-rate coupons, since it would have no way to stop them from being added before the adjustment. The problem can be solved: in the library, this is done for discretized interest-rate swaps by adding fixed-rate coupons on their start dates, much in the same way as floating-rate coupons. Another way to fix the issue would be to roll the underlying back only to the date when it's actually entered, then to make a copy of it and roll the copy back to the exercise date without performing any adjustment. Both solutions are somewhat clumsy at this time; it would be better if QuantLib provided some means to do it more naturally.


Monday, March 16, 2015

Chapter 7, part 1 of 6: the tree framework

Welcome back.

This week, some content from my book (which, as I announced last week, is now available as an ebook from Leanpub; thanks to all those who bought it so far). This post is the first part of a series that will describe the QuantLib tree framework.

But first, some news: with the help and organization of the good Jacob Bettany from MoneyScience, I'll hold a new edition of my Introduction to QuantLib Development course. It's an intensive three-day course in which I explain the architecture of the library and guide attendees through exercises that let them build new financial instruments based on the frameworks I describe. It will be in London from June 29th to July 1st, and you can find more info and a brochure at this link. An early-bird discount is available until April 30th.

And in case you can't make it to London, the March sale at QuantsHub continues, so you might consider the workshop I recorded for them instead. (Oh, and the intro is available on YouTube now.)

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

The tree framework

Together with Monte Carlo simulations, trees are among the most commonly used tools in quantitative finance. As usual, the dual challenge for a framework is to implement a number of reusable (and possibly composable) pieces and to provide customization hooks for injecting new behavior. The QuantLib tree framework has gone through a few revisions, and the current version is a combination of object-oriented and generic programming that does the job without losing too much performance in the process.

The Lattice and DiscretizedAsset classes

The two main classes of the framework are the Lattice and DiscretizedAsset classes. Our intention was for the Lattice class (shown in the listing below) to model the generic concept of a discrete lattice, which might be a tree as well as a finite-difference grid. This never happened; the finite-difference framework went its separate way and is unlikely to come back any time soon. However, the initial design helped keep the Lattice class clean: to this day, it contains almost no implementation details and is not tied to trees.
    class Lattice {
      public:
        Lattice(const TimeGrid& timeGrid) : t_(timeGrid) {}
        virtual ~Lattice() {}

        const TimeGrid& timeGrid() const { return t_; }

        virtual void initialize(DiscretizedAsset&,
                                Time time) const = 0;
        virtual void rollback(DiscretizedAsset&,
                              Time to) const = 0;
        virtual void partialRollback(DiscretizedAsset&,
                                     Time to) const = 0;
        virtual Real presentValue(DiscretizedAsset&) const = 0;

        virtual Disposable<Array> grid(Time) const = 0;
      protected:
        TimeGrid t_;
    };
Its constructor takes a TimeGrid instance and stores it (its only concession to implementation inheritance, together with an inspector that returns the time grid). All other methods are pure virtual. The initialize method must set up a discretized asset so that it can be put on the lattice at a given time. (I do realize this is mostly hand-waving right now. It will become clear as soon as we get to a concrete lattice.) The rollback and partialRollback methods roll the asset backwards in time on the lattice down to the desired time (with a difference I'll explain later); and the presentValue method returns what its name says.

Finally, the grid method returns the values of the discretized quantity underlying the lattice. This is a bit of a smell. The information was required in other parts of the library, and we didn't have any better solution. However, this method has obvious shortcomings. On the one hand, it constrains the return type, which either leaks implementation or forces a type conversion; and on the other hand, it simply makes no sense when the lattice has more than one factor, since the grid should be a matrix or a cube in that case. In fact, two-factor lattices implement it by having it throw an exception. All in all, this method is up for some serious improvement in future versions of the library.

The DiscretizedAsset class is the base class for the other side of the tree framework—the Costello to Lattice's Abbott, as it were. It models an asset that can be priced on a lattice: it works hand in hand with the Lattice class to provide generic behavior, and has hooks that derived classes can use to add behavior specific to the instrument they implement.

As can be seen from the listing below, it's not nearly as abstract as the Lattice class. Most of its methods are concrete, with a few virtual ones that use the Template Method pattern to inject behavior.
    class DiscretizedAsset {
      public:
        DiscretizedAsset()
        : latestPreAdjustment_(QL_MAX_REAL),
          latestPostAdjustment_(QL_MAX_REAL) {}
        virtual ~DiscretizedAsset() {}

        Time time() const { return time_; }
        Time& time() { return time_; }
        const Array& values() const { return values_; }
        Array& values() { return values_; }
        const shared_ptr<Lattice>& method() const {
            return method_;
        }

        void initialize(const shared_ptr<Lattice>& method,
                        Time t) {
            method_ = method;
            method_->initialize(*this, t);
        }
        void rollback(Time to) {
            method_->rollback(*this, to);
        }
        void partialRollback(Time to)  {
            method_->partialRollback(*this, to);
        }
        Real presentValue() {
            return method_->presentValue(*this);
        }

        virtual void reset(Size size) = 0;
        void preAdjustValues() {
            if (!close_enough(time(),latestPreAdjustment_)) {
                preAdjustValuesImpl();
                latestPreAdjustment_ = time();
            }
        }
        void postAdjustValues() {
            if (!close_enough(time(),latestPostAdjustment_)) {
                postAdjustValuesImpl();
                latestPostAdjustment_ = time();
            }
        }
        void adjustValues() {
            preAdjustValues();
            postAdjustValues();
        }

        virtual std::vector<Time> mandatoryTimes() const = 0;

      protected:
        bool isOnTime(Time t) const {
            const TimeGrid& grid = method()->timeGrid();
            return close_enough(grid[grid.index(t)],time());
        }
        virtual void preAdjustValuesImpl() {}
        virtual void postAdjustValuesImpl() {}

        Time time_;
        Time latestPreAdjustment_, latestPostAdjustment_;
        Array values_;

      private:
        shared_ptr<Lattice> method_;
    };
Its constructor takes no arguments, but initializes a couple of internal variables. The main inspectors return the data comprising its state, namely, the time \( t \) of the lattice nodes currently occupied by the asset and its values on the same nodes; both inspectors give both read and write access to the data to allow the lattice implementation to modify them. A read-only inspector returns the lattice on which the asset is being priced.

The next bunch of methods implements the common behavior that is inherited by derived classes and provides the interface to be called by client code. The body of a tree-based engine will usually contain something like the following after instantiating the tree and the discretized asset:
    asset.initialize(lattice, T);
    asset.rollback(t0);
    results_.value = asset.presentValue();
The initialize method stores the lattice that will be used for pricing and sets the initial values of the asset (or rather its final values, since the time T passed at initialization is the maturity time); the rollback method rolls the asset backwards on the lattice until the time t0; and the presentValue method extracts the value of the asset as a single number.

The three method calls above seem simple, but their implementation triggers a good deal of behavior in both the asset and the lattice and involves most of the other methods of DiscretizedAsset as well as those of Lattice. The interplay between the two classes is not nearly as funny as Who's on First, but it's almost as complex to follow; thus, you might want to refer to the sequence diagram shown below.


The initialize method sets the calculation up by placing the asset on the lattice at its maturity time (the asset's, I mean) and preparing it for rollback. This means that on the one hand, the vector holding the asset values on each lattice node must be dimensioned correctly; and on the other hand, that it must be filled with the correct values. Like most of the DiscretizedAsset methods, initialize does this by delegating part of the actual work to the passed lattice; after storing it in the corresponding data member, it simply calls the lattice's initialize method passing the maturity time and the asset itself.

Now, the Lattice class doesn't implement initialize, which is left as purely virtual; but any sensible implementation in derived classes will do the dance shown in the sequence diagram. It might perform some housekeeping step of its own, not shown here; but first, it will determine the size of the lattice at the maturity time (that is, the number of nodes) probably by calling a corresponding method; then, it will store the time into the asset (as you remember, the asset provides read/write access to its state) and pass the size to the asset's reset method. The latter, implemented in derived classes, will resize the vector of values accordingly and fill it with instrument-specific values (for instance, a bond might store its redemption and the final coupon, if any).
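In code, a derived lattice might implement initialize along the lines of the sketch below; the size(i) method, returning the number of nodes at the i-th time on the grid, is assumed here and is not part of the Lattice interface shown earlier.
    void initialize(DiscretizedAsset& asset, Time t) const {
        Size i = t_.index(t);    // locate t on the stored time grid
        asset.time() = t;        // store the time into the asset...
        asset.reset(size(i));    // ...and have it resize and fill its values
    }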

Next is the rollback method. Again, it calls the corresponding method in the Lattice class, which performs the heavy-machinery work of stepping through the tree (or whatever kind of lattice it models). It reads the current time from the asset, so it knows on what nodes it is currently sitting; then, a close interplay begins.

The point is, the lattice can't simply roll the asset back to the desired time, since there might be all kinds of events occurring: coupon payments, exercises, you name it. Therefore, it just rolls it back one short step, modifies the asset values accordingly—which includes both combining nodes and discounting—and then pauses to ask the asset if there's anything that needs to be done; that is, to call the asset's adjustValues method.

The adjustment is done in two steps, calling first the preAdjustValues and then the postAdjustValues method. This is done so that other assets have a chance to perform their own adjustment between the two; I'll show an example of this in a later post. Each of the two methods performs a bit of housekeeping (namely, they store the time of the latest adjustment so that the asset is not adjusted twice at the same time; this might happen when composing assets, and would obviously wreak havoc on the results) and then calls a virtual method (preAdjustValuesImpl or postAdjustValuesImpl, respectively) which makes the instrument-specific adjustments.

When this is done, the ball is back in the lattice's field. The lattice rolls back another short step, and the whole thing repeats again and again until the asset reaches the required time.
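A possible implementation of this loop in a derived lattice is sketched below; stepback is an assumed helper that combines nodes and discounts the values from the (i+1)-th to the i-th step, and the difference between rollback and partialRollback is simply whether the final adjustment is performed.
    void partialRollback(DiscretizedAsset& asset, Time to) const {
        Integer iFrom = Integer(t_.index(asset.time()));
        Integer iTo = Integer(t_.index(to));
        for (Integer i=iFrom-1; i>=iTo; --i) {
            Array newValues(size(i));
            stepback(i, asset.values(), newValues);  // combine and discount
            asset.time() = t_[i];
            asset.values() = newValues;
            if (i != iTo)               // pause and let the asset adjust
                asset.adjustValues();   // its values, except at the last step
        }
    }
    void rollback(DiscretizedAsset& asset, Time to) const {
        partialRollback(asset, to);
        asset.adjustValues();           // ...which rollback performs, too
    }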

Finally, the presentValue method returns, well, the present value of the asset. It is meant to be called by client code after rolling the asset back to the earliest interesting time—that is, to the time for which further rolling back to today's date only involves discounting and not adjustments of any kind: e.g., it might be the earliest exercise time of an option or the first coupon time of a bond. As usual, the asset delegates the calculation by passing itself to the lattice's presentValue method; a given lattice implementation might simply roll the asset back to the present time (if it isn't already there, of course) and read its value from the root node, whereas another lattice might be able to take a shortcut and calculate the present value as a function of the values on the nodes at the current time.
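Under the first, simpler strategy, a sketch of the method could be as short as this:
    Real presentValue(DiscretizedAsset& asset) const {
        rollback(asset, t_[0]);    // roll back to the first grid time...
        return asset.values()[0];  // ...and read the value at the root node
    }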

As usual, there's room for improvement. Some of the steps are left as an implicit requirement; for instance, the fact that the lattice's initialize method should call the asset's reset method, or that reset must resize the array stored in the asset (in hindsight, the array should be resized in the lattice's initialize method before calling reset; this would minimize repetition, since we'll write many more assets than lattices). However, this is what we have at this time. In the next post, we'll move on and build some assets.


Monday, March 9, 2015

Implementing QuantLib available from Leanpub

Hello everybody.

Big news: Implementing QuantLib is available on Leanpub.



For those of you that are not familiar with Leanpub, it is a publishing platform for ebooks that specializes in lean publishing; that is, publishing books which are still in progress and can thus benefit from reader feedback while they're still being completed. When you buy a book in progress, you get the current version now and you'll get all further updates for free.

Long story short: if you are interested in an Epub or Mobi version of my book, go get it (you can pay as much as you want). The drafts of the PDF version remain available for free on this site.

Of course, I'll be very grateful for all feedback (that's the whole point...) I'm particularly curious to know if the ebook looks ok on different devices, and if code listings wrap correctly—meaning that they shouldn't wrap; if they do, try decreasing the font size.

All in all, I think this will make for an interesting experiment. If it goes well, The QuantLib Notebooks will be next.

Ok, that's all for this post. I'll be back soon with some news on my next course. Oh, and the March sale at QuantsHub continues.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.


Monday, March 2, 2015

QuantLib notebook: numerical Greeks calculation

Hello everybody.

Here is the screencast of a new QuantLib notebook (in case you're new to this, the whole series is on both YouTube and Vimeo; choose the one that suits you best). In this one, I show how quotes can be used for the numerical calculation of Greeks.

And so that I don't waste a good segue into self-promotion, this was also one of the notebooks that I used in the workshop I recorded for QuantsHub. I hear there's a March sale going on, so this might be a good time to have a look.

That's all for this week. Turn the video below to full screen, sit back and enjoy.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.





Monday, February 16, 2015

A quick look at the 1.5 release

Welcome back.

Just a quick post to share some information on the latest QuantLib release (version 1.5, released this past Tuesday; grab it at this link if you haven't already) and to thank all the contributors that made it possible.

There were quite a few of them. A quick bit of git-fu shows that this release includes 566 commits by 16 authors (git shows 18, but two of them are different addresses for the same persons):


There are more actual contributors, though: a few people contributed patch files which were committed into the library by yours truly and thus don't show up here. I have no way to retrieve all their names quickly, but you can find them in the list of changes for the 1.5 release. While I was compiling it, I also checked that the pull requests that made it into the release were tagged correctly, so it's also possible to search and display all 68 of them on GitHub: from that page you can drill down into any pull request that catches your interest and see what commits it contained, the code changes, and any discussion that went on before merging.

One final note: not including version 1.4.1, which only contained a bug fix, the previous version (QuantLib 1.4) was released in February 2014—one year ago. I think we should try and release more often. The content shouldn't be a problem, seeing how we already have 27 open pull requests as I write this (as well as a few very interesting projects that I'll describe in a future post).

I'll stop here for now. Thanks again to all those who contributed to the 1.5 release!

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.


Monday, February 9, 2015

Odds and ends: global settings

Hello everybody.

This week, a section from the book appendix; namely, the one on the infamous Settings class. Feedback would be particularly appreciated on this one.

Did I mention that the workshop "A Look at QuantLib Usage and Development" I recorded for Quants Hub last October is now available for purchase? Right, I did. But in case you missed it, you can go back and read my last post for links and details. Oh, and I'm not the only QuantLib developer that did that; Ferdinando has recorded one as well—although it has nothing to do with QuantLib...

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.

Odds and ends: global settings

The Settings class, outlined in the listing below, is a singleton (I'll cover this pattern in a future post) that holds information global to the whole library.
    class Settings : public Singleton<Settings> {
      private:
        class DateProxy : public ObservableValue<Date> {
            DateProxy();
            operator Date() const;
            ...
        };
        ... // more implementation details
      public:
        DateProxy& evaluationDate();
        const DateProxy& evaluationDate() const;
        boost::optional<bool>& includeTodaysCashFlows();
        boost::optional<bool> includeTodaysCashFlows() const;
        ...
    };
Most of its data are flags that you can look up in the official documentation, or that you can simply live without; the one piece of information that you'll need to manage is the evaluation date, which defaults to today's date and is used for the pricing of instruments and the fixing of any other quantity.

This poses a challenge: instruments whose value can depend on the evaluation date must be notified when the latter changes. This is done by returning the corresponding information indirectly, namely, wrapped inside a proxy class; this can be seen from the signature of the relevant methods. The proxy inherits from the ObservableValue class template (outlined below) which is implicitly convertible to Observable and overloads the assignment operator in order to notify any changes. Finally, it allows automatic conversion of the proxy class to the wrapped value.
    template <class T>
    class ObservableValue {
      public:
        // initialization and assignment
        ObservableValue(const T& t)
        : value_(t), observable_(new Observable) {}
        ObservableValue<T>& operator=(const T& t)  {
          value_ = t;
          observable_->notifyObservers();
          return *this;
        }
        // implicit conversions
        operator T() const { return value_; }
        operator boost::shared_ptr<Observable>() const {
          return observable_;
        }
      private:
        T value_;
        boost::shared_ptr<Observable> observable_;
    };
This allows one to use the facility with a natural syntax. On the one hand, it is possible for an observer to register with the evaluation date, as in:
    registerWith(Settings::instance().evaluationDate());
on the other hand, it is possible to use the returned value just like a Date instance, as in:
    Date d2 = calendar.adjust(Settings::instance().evaluationDate());
which triggers an automatic conversion; and on the gripping hand, a simple assignment syntax can be used for setting the evaluation date, as in:
    Settings::instance().evaluationDate() = d;
which will cause all observers to be notified of the date change.
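For instance, here's a small sketch putting the above together; the instrument is assumed to depend, through its engine and curves, on the evaluation date.
    Settings::instance().evaluationDate() = Date(17, February, 2015);
    Real npv1 = instrument.NPV();  // calculated as of February 17th
    Settings::instance().evaluationDate() = Date(18, February, 2015);
    Real npv2 = instrument.NPV();  // the observers were notified, so
                                   // this triggers recalculation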


Of course, the elephant in the room is the fact that we have a global evaluation date at all. The obvious drawback is that one can't perform two parallel calculations with two different evaluation dates, at least in the default library configuration; but while this is true, it is also a kind of red herring. On the one hand, there's a compilation flag that allows a program to have one distinct Settings instance per thread (with a bit of work on the part of the user) but as we'll see, this doesn't solve all the issues. On the other hand, the global data may cause unpleasantness even in a single-threaded program: even if one wanted to evaluate just one instrument on a different date, the change would trigger recalculation for every other instrument in the system when the evaluation date is set back to its original value.

This clearly points (that is, quite a few smart people had the same idea when we talked about it) to some kind of context class that should replace the global settings. But how would one select a context for any given calculation?

It would be appealing to add a setContext method to the Instrument class, and to arrange things so that during calculation the instrument propagates the context to its engine and in turn to any term structures that need it. However, I don't think this can be implemented easily.

First, the instrument and its engine are not always aware of all the term structures that are involved in the calculation. For instance, a swap contains a number of coupons, any of which might or might not reference a forecast curve. We're not going to reach them unless we add the relevant machinery to all the classes involved. I'm not sure that we want to set a context to a coupon.

Second, and more important, setting the context for an engine would be a mutating operation. Leaving it to the instrument during calculations would execute it at some point during the call to its NPV method, which is supposed to be const. This would make it way too easy to trigger a race condition; for instance with a harmless-looking operation such as using the same discount curve for two instruments and evaluating them at different dates. A user with a minimum of experience in parallel programming wouldn't dream of, say, relinking the same handle in two concurrent threads; but when the mutation is hidden inside a const method, she might not be aware of it. (But wait, you say. Aren't there other mutating operations possibly being done during the call to NPV? Good catch: see the aside at the end of this post.)

So it seems that we have to set up the context before starting the calculation. This rules out driving the whole thing from the instrument (because, again, we would be hiding the fact that setting a context to an instrument could undo the work done by another that shared a term structure with the first) and suggests that we'd have to set the context explicitly on the several term structures. On the plus side, we no longer run the risk of a race in which we unknowingly try to set the same context to the same object. The drawbacks are that our setup just got more complex, and that we'd have to duplicate curves if we want to use them concurrently in different contexts: two parallel calculations on different dates would mean, for instance, two copies of the overnight curve for discounting. And if we have to do this, we might as well manage with per-thread singletons.

Finally, I'm skipping over the scenario in which the context is passed but not saved. It would lead to method calls like
    termStructure->discount(t, context);
which would completely break caching, would cause discomfort to all parties involved, and if we wanted stuff like this we'd write in Haskell.

To summarize: I hate to close the section on a gloomy note, but all is not well. The global settings are a limitation, but I don't have a solution; and what's worse, the possible changes increase complexity. We would not only tell a first-time user looking for the Black-Scholes formula that she needs term structures, quotes, an instrument and an engine: we'd also put contexts in the mix. A little help here?

Aside: more mutations than in a B-movie.

Unfortunately, there are already a number of things that change during a call to the supposedly const method Instrument::NPV.

To begin with, there are the arguments and results structures inside the engine, which are read and written during calculation and thus prevent the same engine from being used concurrently for different instruments. This might be fixed by adding a lock to the engine (which would serialize the calculations) or by changing the interface so that the engine's calculate method takes the arguments structure as a parameter and returns the results structure.

Then, there are the mutable data members of the instrument itself, which are written at the end of the calculation. Whether this is a problem depends on the kind of calculations one's doing. I suppose that calculating the value of the instrument twice in concurrent threads might just result in the same values being written twice.

The last one that comes to mind is a hidden mutation, and it's probably the most dangerous. Trying to use a term structure during the calculation might trigger its bootstrap, and two concurrent ones would trash each other's calculations. Due to the recursive nature of the bootstrap, I'm not even sure how we could add a lock around it. So if you do decide to perform concurrent calculations (being careful, setting up everything beforehand, and using the same evaluation date), be sure to trigger a full bootstrap of your curves before starting.
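One common way to force the full bootstrap up front is to ask the curve for a value at its last node, which requires all the previous ones to be calculated; for instance, assuming curve points to a piecewise yield term structure:
    // forces a full bootstrap before any threads are started
    curve->discount(curve->maxDate());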


Monday, January 19, 2015

Quants Hub workshop available

Welcome back, and happy new year.

Just a short post with some news and self-promotion. The holiday season* didn't just bring a new QuantLib blog from Peter Caspers—whom you want to follow, trust me on this one—but also the publication online of the workshop I recorded for Quants Hub last October. It's now available for purchase from the Quants Hub site, under the title "A Look at QuantLib Usage and Development". It may be an option for those of you that can't attend my courses in London (because there's an ocean in the middle, for instance).

But I think I'll leave the explanations to the guy below (he seems to know what he's talking about, even though he's got a funny accent). Click on the image to open the workshop page, read a description of the contents, and see the first 20 minutes or so of the recording.




* As to the actual bearer of these gifts, there are different opinions. If you asked around in Italy, the answer would be Santa or the baby Jesus or St. Lucia, depending on the region.

Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the widgets for that are in the sidebar, at the top right of the page. Also, make sure to check my Training page.
